# Manage virtual media streams in Meet Media API

Last updated (UTC): 2025-08-29.

| **Developer Preview:** Available as part of the [Google Workspace Developer Preview Program](https://developers.google.com/workspace/preview), which grants early access to certain features.
|
| **To use the Meet Media API to access real-time media from a conference, the Google Cloud project, OAuth principal, and all participants in the conference must be enrolled in the Developer Preview Program.**

Virtual Media Streams, in the context of WebRTC conferencing, are media streams
generated by a Selective Forwarding Unit (SFU) to aggregate and distribute media
from multiple participants. Unlike direct peer-to-peer media streams, which
would create a complex mesh of connections in large conferences, virtual media
streams simplify the topology. The SFU receives individual media streams from
each participant and selectively forwards the active or relevant streams to
other participants, multiplexing them onto a smaller, fixed set of outgoing
virtual media streams.

This approach reduces the number of simultaneous incoming streams each
participant needs to handle, lowering processing and bandwidth requirements.
Each virtual stream carries media from one participant at a time, dynamically
adjusted by the SFU based on factors like speaker activity or video assignment.
Participants receive these virtual streams, effectively seeing a composed view
of the conference without needing to manage individual streams from every other
participant. This abstraction is crucial for scaling WebRTC conferences to a
large number of participants.

To receive audio, the client must [offer](/workspace/meet/media-api/guides/concepts#flow)
exactly three audio media descriptions, creating three local audio
[transceivers](/workspace/meet/media-api/guides/overview#rtp-transceiver).
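This offer setup can be sketched with the standard WebRTC `addTransceiver()` call. The helper below is hypothetical, not part of the Meet Media API; it also adds the one to three receive-only video transceivers the API allows. In a browser you would pass a real `RTCPeerConnection`; a stub that records calls keeps the sketch self-contained:

```javascript
// Hypothetical helper: prepare the receive-only transceivers the
// Meet Media API expects before createOffer() -- exactly three audio,
// plus one to three video. It works against any object exposing the
// standard RTCPeerConnection.addTransceiver(kind, init) method.
function prepareMeetTransceivers(pc, videoCount = 3) {
  if (videoCount < 1 || videoCount > 3) {
    throw new RangeError("expected 1 to 3 video transceivers");
  }
  for (let i = 0; i < 3; i++) {
    pc.addTransceiver("audio", { direction: "recvonly" });
  }
  for (let i = 0; i < videoCount; i++) {
    pc.addTransceiver("video", { direction: "recvonly" });
  }
}

// Stub standing in for a real RTCPeerConnection, for illustration only.
const pc = {
  calls: [],
  addTransceiver(kind, init) { this.calls.push(`${kind}/${init.direction}`); },
};
prepareMeetTransceivers(pc, 2);
console.log(pc.calls.join(", "));
// audio/recvonly, audio/recvonly, audio/recvonly, video/recvonly, video/recvonly
```

Declaring the transceivers as `recvonly` matches the receive-only role of a media client: it consumes conference media without contributing any.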
To receive video, the client must offer one to three video media descriptions,
establishing that number of video transceivers.

Receivers
---------

Each client-owned transceiver has a dedicated
[`RtpReceiver`](/workspace/meet/media-api/guides/overview#rtp-transceiver) and a dedicated
"media track" that receives the RTP streams from Meet servers.

Each track has a unique ID and receives its own distinct stream of RTP packets
from that specific media source. For example, *Track A* might receive audio from
`production-1` while *Track B* receives audio from `production-2`.

SSRCs
-----

Each RTP packet has a [Synchronization Source
(SSRC)](/workspace/meet/media-api/guides/overview#ssrc) header value, tying it to a
specific track.

Audio sessions through the Meet Media API use three distinct media
streams, each with its own static SSRC. Once established, these SSRC values
never change for the life of the session.

| **Note:** This Meet Media API behavior is **not typical** for WebRTC sessions, where the SSRC for a track might change if the input source (for example, a device) changes.

Virtual streams
---------------

The Meet Media API uses
[Virtual Media Streams](/workspace/meet/media-api/guides/overview#vstreams). These are
static throughout the session, but the source of the packets may change to
reflect the
[most relevant](/workspace/meet/media-api/guides/video-assignment#video-assignment-requests)
feeds. Virtual Media Streams behave the same for audio and video.

The [Contributing Source
(CSRC)](/workspace/meet/media-api/guides/overview#csrc) value in the RTP packet
headers identifies the *true* source of the RTP packets. Meet
assigns each participant in a
[conference](/workspace/meet/media-api/guides/overview#conference) their own unique CSRC
when they join. This value remains constant until they leave.

Because the number of SSRCs is constant throughout a Meet Media API
session, there are three possible scenarios:
1. **More participants than SSRCs available**:

   Meet transmits the three loudest participants across the three
   SSRCs. Since each RTP stream is on its own dedicated SSRC, there's no
   intermixing between the streams.

   **Figure 1.** Meet transmits the three loudest participants across the three SSRCs.

   If one of those streams is no longer among the three loudest in the
   conference, Meet switches that SSRC to carry the RTP packets of the new
   loudest participant.

   **Figure 2.** Meet switches the RTP packets to the new loudest participant.

2. **Fewer active participants than the three audio SSRCs**:

   When more SSRCs are available than there are streams in the conference,
   Meet maps each available audio stream to its own unique SSRC. Any unused
   SSRCs remain ready and available, but no RTP packets are transmitted on
   them.

   **Figure 3.** Meet maps each available audio stream to its own unique SSRC.

3. **Number of active participants equals the three audio SSRCs**:

   When the number of participants equals the number of available SSRCs,
   each participant's media is mapped to a dedicated SSRC. These mappings
   persist for as long as this scenario holds.

   **Figure 4.** Meet maps each participant's media to a dedicated SSRC.

Related topics
--------------

- [Get started](/workspace/meet/media-api/guides/get-started)
- [Manage video assignment in Meet Media API](/workspace/meet/media-api/guides/video-assignment)
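Because the SSRC identifies the virtual stream and the CSRC identifies the contributing participant, a client can map any received packet back to both by reading the fixed-offset header fields defined in RFC 3550. A minimal parsing sketch, with fabricated packet bytes for illustration:

```javascript
// Parse the SSRC and CSRC list from a raw RTP packet (RFC 3550).
// Bytes 8-11 hold the SSRC (which virtual stream); the low 4 bits of
// byte 0 (the CC field) give the number of 32-bit CSRC entries that
// follow (which participants contributed the media).
function parseRtpSources(packet) {
  const view = new DataView(packet.buffer, packet.byteOffset, packet.byteLength);
  const csrcCount = view.getUint8(0) & 0x0f; // CC field
  const ssrc = view.getUint32(8);            // virtual stream identifier
  const csrcs = [];
  for (let i = 0; i < csrcCount; i++) {
    csrcs.push(view.getUint32(12 + 4 * i));  // contributing participant(s)
  }
  return { ssrc, csrcs };
}

// Fabricated 16-byte packet: version 2, CC = 1, SSRC 1, one CSRC 170.
const pkt = new Uint8Array([
  0x81, 0x6f, 0x00, 0x01, // V=2, P=0, X=0, CC=1; M/PT; sequence number
  0x00, 0x00, 0x00, 0x00, // timestamp
  0x00, 0x00, 0x00, 0x01, // SSRC
  0x00, 0x00, 0x00, 0xaa, // CSRC
]);
const { ssrc, csrcs } = parseRtpSources(pkt);
console.log(ssrc, csrcs); // 1 [ 170 ]
```

In practice, a client keeps a table from CSRC to participant (built from conference metadata), so a change of speaker shows up as a new CSRC on the same unchanging SSRC.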