React Native Integration with Streaming API + LiveKit

Seamlessly integrate real-time avatar streaming in React Native with LiveKit

Welcome to React Native Integration!

This is a mobile-specific adaptation of the Streaming API, tailored for React Native applications.

Key Features

  • The same LiveKit-based streaming capabilities as the web Streaming API, plus guidance and modules for mobile-device compatibility.
  • Useful for integrating KonPro avatars into iOS or Android apps that need live, reactive video content.
  • A step-by-step walkthrough of integrating KonPro's Streaming API with LiveKit in a React Native/Expo application to build real-time avatar streaming experiences.

Quick Start

Create a new Expo project and install necessary dependencies:

```shell
# Create new Expo project
bunx create-expo-app konpro-livekit-demo
cd konpro-livekit-demo

# Install dependencies
bunx expo install @livekit/react-native react-native-webrtc react-native-safe-area-context
bun install -D @config-plugins/react-native-webrtc
```

Step-by-Step Implementation

1. Project Configuration

Update app.json for necessary permissions and plugins:

```json
{
  "expo": {
    "plugins": [
      "@config-plugins/react-native-webrtc",
      [
        "expo-build-properties",
        {
          "ios": {
            "useFrameworks": "static"
          }
        }
      ]
    ],
    "ios": {
      "bitcode": false,
      "infoPlist": {
        "NSCameraUsageDescription": "For streaming video",
        "NSMicrophoneUsageDescription": "For streaming audio"
      }
    },
    "android": {
      "permissions": [
        "android.permission.CAMERA",
        "android.permission.RECORD_AUDIO"
      ]
    }
  }
}
```

2. API Configuration

Create variables for API configuration:

```typescript
const API_CONFIG = {
  serverUrl: "https://api.konpro.ai",
  apiKey: "your_api_key_here",
};
```
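Shipping a long-lived API key inside a mobile bundle is easy to extract, so in production you would typically mint the short-lived session token on your own backend and have the app fetch only that. Below is a minimal sketch of that pattern; the /api/konpro-token endpoint and response shape are hypothetical (your server would implement it by calling streaming.create_token with the secret key), and fetchFn is injectable so the helper can be exercised without a network:

```typescript
// Fetch a short-lived session token from your own backend instead of
// embedding the KonPro API key in the app bundle.
// NOTE: "/api/konpro-token" and the { session_token } response shape are
// hypothetical; implement the endpoint server-side to suit your stack.
export async function getSessionTokenFromBackend(
  fetchFn: typeof fetch = fetch,
  backendUrl = "https://your-backend.example.com"
): Promise<string> {
  const response = await fetchFn(`${backendUrl}/api/konpro-token`, {
    method: "POST",
  });
  if (!response.ok) {
    throw new Error(`Token request failed: ${response.status}`);
  }
  const data = await response.json();
  return data.session_token;
}
```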

3. State Management

Set up necessary state variables in your main component:

```typescript
// In your main component (useState is imported from "react"):
const [wsUrl, setWsUrl] = useState<string>("");
const [token, setToken] = useState<string>("");
const [sessionToken, setSessionToken] = useState<string>("");
const [sessionId, setSessionId] = useState<string>("");
const [connected, setConnected] = useState(false);
const [text, setText] = useState("");
const [loading, setLoading] = useState(false);
const [speaking, setSpeaking] = useState(false);
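Eight separate useState calls can drift out of sync as the app grows (for example, resetting seven of them on close but forgetting the eighth). One alternative, sketched below, is a single useReducer-backed session object so transitions stay atomic; the action names ("session_ready", "reset", and so on) are illustrative, not part of the KonPro API:

```typescript
// A reducer-based alternative to the individual useState calls above.
// Keeping all session fields in one object makes "reset everything"
// a single, atomic transition.
export interface SessionState {
  wsUrl: string;
  token: string;
  sessionToken: string;
  sessionId: string;
  connected: boolean;
  text: string;
  loading: boolean;
  speaking: boolean;
}

export const initialSessionState: SessionState = {
  wsUrl: "",
  token: "",
  sessionToken: "",
  sessionId: "",
  connected: false,
  text: "",
  loading: false,
  speaking: false,
};

export type SessionAction =
  | { type: "session_ready"; wsUrl: string; token: string; sessionToken: string; sessionId: string }
  | { type: "set_text"; text: string }
  | { type: "set_loading"; loading: boolean }
  | { type: "set_speaking"; speaking: boolean }
  | { type: "reset" };

export function sessionReducer(state: SessionState, action: SessionAction): SessionState {
  switch (action.type) {
    case "session_ready":
      return {
        ...state,
        wsUrl: action.wsUrl,
        token: action.token,
        sessionToken: action.sessionToken,
        sessionId: action.sessionId,
        connected: true,
      };
    case "set_text":
      return { ...state, text: action.text };
    case "set_loading":
      return { ...state, loading: action.loading };
    case "set_speaking":
      return { ...state, speaking: action.speaking };
    case "reset":
      return initialSessionState;
  }
}
```

In the component you would then use `const [session, dispatch] = useReducer(sessionReducer, initialSessionState);` and dispatch actions instead of calling individual setters.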

4. Session Creation Flow

4.1 Create Session Token

```typescript
const getSessionToken = async () => {
  const response = await fetch(
    `${API_CONFIG.serverUrl}/v1/streaming.create_token`,
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${API_CONFIG.apiKey}`,
      },
    }
  );
  const data = await response.json();
  return data.data.session_token;
};
```

4.2 Create New Session

```typescript
const createNewSession = async (sessionToken: string) => {
  const response = await fetch(`${API_CONFIG.serverUrl}/v1/streaming.new`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${sessionToken}`,
    },
    body: JSON.stringify({
      quality: "high",
      version: "v1",
      video_encoding: "H264",
    }),
  });
  const data = await response.json();
  return data.data;
};
```

4.3 Start Streaming Session

```typescript
const startStreamingSession = async (
  sessionId: string,
  sessionToken: string
) => {
  const response = await fetch(`${API_CONFIG.serverUrl}/v1/streaming.start`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${sessionToken}`,
    },
    body: JSON.stringify({
      session_id: sessionId,
      session_token: sessionToken,
      silence_response: "false",
      stt_language: "en",
    }),
  });
  const data = await response.json();
  return data.data;
};
```
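Steps 4.1 through 4.3 always run in the same order, so it can be convenient to chain them into one bootstrap helper. The sketch below assumes the same endpoints as above; the url and access_token fields on the streaming.new response are assumptions (they are what feed wsUrl and the LiveKit token in step 5, but the exact field names may differ in your API response). fetchFn is injectable for testing:

```typescript
// Chains the three calls from steps 4.1-4.3: create a session token,
// create a session, then start it.
// ASSUMPTION: streaming.new returns { session_id, url, access_token } in
// its data payload; adjust the field names to your actual response.
interface StreamingSession {
  sessionToken: string;
  sessionId: string;
  url: string;          // LiveKit server URL for step 5
  accessToken: string;  // LiveKit room token for step 5
}

export async function bootstrapSession(
  serverUrl: string,
  apiKey: string,
  fetchFn: typeof fetch = fetch
): Promise<StreamingSession> {
  // Shared POST helper: JSON body, bearer auth, unwraps the `data` envelope.
  const post = async (path: string, auth: string, body?: unknown) => {
    const res = await fetchFn(`${serverUrl}${path}`, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${auth}`,
      },
      body: body === undefined ? undefined : JSON.stringify(body),
    });
    if (!res.ok) throw new Error(`${path} failed: ${res.status}`);
    return (await res.json()).data;
  };

  const { session_token } = await post("/v1/streaming.create_token", apiKey);
  const session = await post("/v1/streaming.new", session_token, {
    quality: "high",
    version: "v1",
    video_encoding: "H264",
  });
  await post("/v1/streaming.start", session_token, {
    session_id: session.session_id,
    session_token,
    silence_response: "false",
    stt_language: "en",
  });

  return {
    sessionToken: session_token,
    sessionId: session.session_id,
    url: session.url,
    accessToken: session.access_token,
  };
}
```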

5. LiveKit Room Setup

Embed LiveKit's LiveKitRoom component in your tree. Before the first connection, call registerGlobals() from @livekit/react-native once at app startup so the WebRTC globals the SDK depends on are installed:

```tsx
<LiveKitRoom
  serverUrl={wsUrl}
  token={token}
  connect={true}
  options={{
    adaptiveStream: { pixelDensity: "screen" },
  }}
  audio={false}
  video={false}
>
  <RoomView
    onSendText={sendText}
    text={text}
    onTextChange={setText}
    speaking={speaking}
    onClose={closeSession}
    loading={loading}
  />
</LiveKitRoom>
```

6. Video Track Component

Implement a RoomView component to display video streams:

```tsx
import { KeyboardAvoidingView, Platform, View } from "react-native";
import { SafeAreaView } from "react-native-safe-area-context";
import {
  isTrackReference,
  useTracks,
  VideoTrack,
} from "@livekit/react-native";
import { Track } from "livekit-client";

const RoomView = ({
  onSendText,
  text,
  onTextChange,
  speaking,
  onClose,
  loading,
}: RoomViewProps) => {
  // Subscribe only to remote camera tracks (the avatar's video)
  const tracks = useTracks([Track.Source.Camera], { onlySubscribed: true });

  return (
    <SafeAreaView style={styles.container}>
      <KeyboardAvoidingView
        behavior={Platform.OS === "ios" ? "padding" : "height"}
        style={styles.container}
      >
        <View style={styles.videoContainer}>
          {tracks.map((track, idx) =>
            isTrackReference(track) ? (
              <VideoTrack
                key={idx}
                style={styles.videoView}
                trackRef={track}
                objectFit="contain"
              />
            ) : null
          )}
        </View>
        {/* Controls */}
      </KeyboardAvoidingView>
    </SafeAreaView>
  );
};
```

7. Send Text to Avatar

```typescript
const sendText = async () => {
  try {
    setSpeaking(true);
    const response = await fetch(`${API_CONFIG.serverUrl}/v1/streaming.task`, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${sessionToken}`,
      },
      body: JSON.stringify({
        session_id: sessionId,
        text: text,
        task_type: "talk",
      }),
    });
    if (!response.ok) {
      throw new Error(`streaming.task failed: ${response.status}`);
    }
    setText("");
  } catch (error) {
    console.error("Error sending text:", error);
  } finally {
    setSpeaking(false);
  }
};
```
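The request logic above can be pulled out of the component into a small testable wrapper that trims the input, skips empty sends, and surfaces HTTP errors. This is a sketch, not part of the KonPro SDK; fetchFn is injectable so the wrapper can be verified without a live session:

```typescript
// A standalone wrapper around the streaming.task call: returns false when
// there is nothing to send, true on success, and throws on HTTP errors.
export async function sendTaskText(
  serverUrl: string,
  sessionToken: string,
  sessionId: string,
  text: string,
  fetchFn: typeof fetch = fetch
): Promise<boolean> {
  const trimmed = text.trim();
  if (trimmed.length === 0) return false; // skip empty or whitespace-only input

  const res = await fetchFn(`${serverUrl}/v1/streaming.task`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${sessionToken}`,
    },
    body: JSON.stringify({
      session_id: sessionId,
      text: trimmed,
      task_type: "talk",
    }),
  });
  if (!res.ok) throw new Error(`streaming.task failed: ${res.status}`);
  return true;
}
```

The component's sendText handler then reduces to state bookkeeping around a single call to this function.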

8. Close Session

```typescript
const closeSession = async () => {
  try {
    setLoading(true);
    await fetch(`${API_CONFIG.serverUrl}/v1/streaming.stop`, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${sessionToken}`,
      },
      body: JSON.stringify({
        session_id: sessionId,
      }),
    });

    // Reset all session state
    setConnected(false);
    setSessionId("");
    setSessionToken("");
    setWsUrl("");
    setToken("");
    setText("");
    setSpeaking(false);
  } catch (error) {
    console.error("Error closing session:", error);
  } finally {
    setLoading(false);
  }
};
```

Running the App

```shell
# Install dependencies
bun install

# Generate the native projects (development build)
bunx expo prebuild

# Run on iOS
bunx expo run:ios
# or on a physical device
bunx expo run:ios --device

# Run on Android
bunx expo run:android
```

⚠️ Note

WebRTC requires a development build: run the app on a physical device or a simulator/emulator. Expo Go does not include the native WebRTC modules, so it cannot be used here. For inspecting network traffic, use React Native Debugger.

System Flow

  1. Session setup (steps 1-4)
  2. Video streaming (steps 5-6)
  3. Avatar interaction loop (step 7)
  4. Session closure (step 8)

Complete Demo Code

For a full example, check the complete code in App.tsx. The implementation includes:

Key Implementation Features

  • Complete React Native component with state management
  • KonPro API integration with session handling
  • LiveKit room setup and video streaming
  • WebSocket connection for real-time communication
  • Mobile-optimized UI with SafeAreaView and KeyboardAvoidingView
  • Error handling and loading states
  • Platform-specific styling and behavior

Note: The complete demo code contains extensive TypeScript/React Native implementation. For production use, we recommend using the official KonPro Streaming SDK package which provides a more robust and maintainable solution.

You can find the complete demo repository here.

Conclusion

You've now integrated KonPro's Streaming API with LiveKit in your React Native project. This setup enables real-time interactive avatar experiences with minimal effort, providing a solid foundation for building engaging mobile applications with AI-powered avatars.
