
Conversation

@sgfn (Member) commented Jul 11, 2024

No description provided.

@sgfn requested review from LVala and mickel8, July 11, 2024 12:51
@mickel8 (Contributor) left a comment

Awesome work! 🚀

I added a couple of comments that should make the code a bit simpler 😉

end

@impl true
def handle_cast({:notification, {:forward, peer, obt}}, state) do
@mickel8 (Contributor) commented Jul 12, 2024

I believe that using PubSub terms would fit better here

@sgfn (Member, Author)

You mean just changing :forward to :subscribe, or having the entire information exchange happen through a pubsub?

@mickel8 (Contributor)

We can try the renamings first 🤔
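
For reference, the "full PubSub" variant mentioned above could look roughly like this with Phoenix.PubSub (a sketch only; the Nexus.PubSub name, the topic scheme and the payload shape are assumptions, not the PR's actual code):

# Illustrative only: a peer interested in another peer's media subscribes
# to that peer's topic (assumed scheme "peer:<id>")...
Phoenix.PubSub.subscribe(Nexus.PubSub, "peer:" <> peer_id)

# ...and the source peer broadcasts to its topic instead of casting
# to each subscriber process directly:
Phoenix.PubSub.broadcast(Nexus.PubSub, "peer:" <> peer_id, {:track_added, track})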

console.log("Received ICE candidate: " + payload.body);
if (pc.remoteDescription !== null) {
pc.addIceCandidate(candidate);
} else {
@mickel8 (Contributor)

This should never happen as we send SDP offer immediately after generating it on the BE 🤔

@sgfn (Member, Author)

Yeah, it appears I was doing something seriously wrong to get to the point where I needed to write this if.

pc.onicecandidate = event => {
if (event.candidate == null) {
console.log("Gathering candidates complete");
forwardSdpAnswer();
@mickel8 (Contributor)

You can push the SDP answer immediately after generating it. The WebRTC API was designed so that you can send every single message as soon as possible. So the flow should be like this:

  1. Receive offer
  2. Apply offer
  3. Create answer
  4. Apply answer
  5. Send answer to the other side
  6. Every time there is a new candidate push it to the other side

Candidates always have to refer to some m-line from the SDP offer/answer, so they cannot appear before the local description is created.
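
For illustration, here is the same "send everything as soon as it is ready" flow sketched on the Elixir side with ex_webrtc's PeerConnection (a rough sketch only; the GenServer shape, the message terms and the signaling pid are assumptions, not the PR's code). On the browser side the fix is analogous: push the answer right after setLocalDescription resolves and push each candidate from onicecandidate as it arrives.

defmodule Example.Answerer do
  use GenServer

  alias ExWebRTC.{PeerConnection, SessionDescription}

  def start_link(signaling), do: GenServer.start_link(__MODULE__, signaling)

  @impl true
  def init(signaling) do
    # assumed: default PeerConnection options are enough for this sketch
    {:ok, pc} = PeerConnection.start_link()
    {:ok, %{pc: pc, signaling: signaling}}
  end

  @impl true
  def handle_cast({:offer, sdp}, %{pc: pc, signaling: signaling} = state) do
    # steps 1-4: apply the remote offer, create and apply the local answer
    :ok = PeerConnection.set_remote_description(pc, %SessionDescription{type: :offer, sdp: sdp})
    {:ok, answer} = PeerConnection.create_answer(pc)
    :ok = PeerConnection.set_local_description(pc, answer)
    # step 5: send the answer right away -- no waiting for ICE gathering
    send(signaling, {:answer, answer.sdp})
    {:noreply, state}
  end

  @impl true
  def handle_info({:ex_webrtc, pc, {:ice_candidate, candidate}}, %{pc: pc} = state) do
    # step 6: push every candidate to the other side as soon as it appears
    send(state.signaling, {:ice_candidate, candidate})
    {:noreply, state}
  end
end

A channel process could then cast {:offer, sdp} to this GenServer and forward the {:answer, _} and {:ice_candidate, _} messages it receives back over the socket.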

@mickel8 (Contributor)

I'm not sure about this Chamber process. If I understand correctly, it only keeps track of peers that are ready and pending, and that's it?

If I were writing that, I would offload this responsibility to the PeerSupervisor which keeps track of the peer connections anyway using the registry. Just do PeerSupervisor.get_ready_pcs which simply iterates over PCs and asks them if they are ready. It may be just a tiny bit less performant, but removes a whole layer of abstraction and additional state that you have to keep consistent with the rest of the app.

I believe that the rest of the functionality (broadcast_pli, track notifications) could also be performed by the supervisor/registry, but I haven't put much thought into it.
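
A rough sketch of that alternative, assuming the peers register themselves in a Registry (Nexus.PeerRegistry and the :ready? call are made-up names for illustration, not the PR's actual code):

defmodule Nexus.PeerSupervisor do
  # Illustrative sketch: derive the list of ready peer connections from the
  # Registry on demand instead of tracking it in a separate Chamber process.
  def get_ready_pcs do
    # assumed: each Peer registers itself under its id in Nexus.PeerRegistry
    Nexus.PeerRegistry
    |> Registry.select([{{:"$1", :"$2", :_}, [], [{{:"$1", :"$2"}}]}])
    |> Enum.filter(fn {_id, pid} -> GenServer.call(pid, :ready?) end)
  end
end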

@sgfn (Member, Author)

I see where you're coming from, however, I'd argue for keeping it as-is for now. If we ever choose to extend the demo to allow multiple chambers/rooms, we'd have to actively track the peers anyway. I don't believe the complexity cost of the current solution is so great that we necessarily have to rewrite that part just for the sake of simplicity

@mickel8 (Contributor)

If you insist. Separate supervisors per room are a must if we ever extend the example, though, so I would get rid of it anyway.

@sgfn requested review from LVala and mickel8, July 15, 2024 13:43

alias Nexus.Peer.State
alias NexusWeb.PeerChannel

defmodule State do
@mickel8 (Contributor)

I feel like this could be just a @type instead of a struct? It's not used anywhere outside of the Peer module, and I did not notice you using the %State{state | ...} syntax much (which would ensure you only use valid fields), so I don't really see the benefit of it being a struct. Or maybe put_in from Bunch.Access validates the used fields at compile time?

@mickel8 (Contributor)

Although that's a nitpick, so I guess you can leave it as it is.
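
For comparison, the plain-map alternative could look roughly like this (field names here are illustrative guesses, not the PR's actual state):

# Sketch of the @type alternative: same shape, no struct module.
@type state :: %{
        id: String.t(),
        pc: pid(),
        subscribers: [pid()]
      }

# put_in/2 with the dot syntax works the same way on plain maps and structs:
state = put_in(state.subscribers, [self() | state.subscribers])
# only the struct update syntax %State{state | ...} checks the keys at compile time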

@sgfn changed the title from "Nexus initial commit" to "[Nexus] Nexus initial commit" Jul 16, 2024
@mickel8 (Contributor) left a comment

A couple of actionable items that we can continue in subsequent PRs:

  • we don't want to use Bunch, as it introduces more complexity for people from the outside who just want to see how they can create a simple video room. put_in can also be used on structs, e.g. put_in(struct.foo.bar, :baz)
  • Room shouldn't tell a peer what to do. It should only notify about events like "peer added" or "peer removed"
  • I would insist on flattening State in the Peer module. Let's just have a single Peer module that defines a Peer struct, or a Peer module that uses a plain map as its state
  • broadcast_pli is too aggressive. We should request a keyframe for a specific track. If ex_webrtc doesn't allow this right now, we need to open an issue in ex_webrtc and solve that first
  • instead of the pli term, let's use keyframe; it will be easier for other people to understand
  • let's replace the notify function with specific calls like Peer.peer_added or Peer.peer_removed. This way the API will be easier to read
  • do something similar with subscribing, e.g. Peer.add_subscriber(new_peer, self()) or Peer.subscribe

At the end, the API should look more or less like this:

defmodule Peer do
  def start_link(args, opts)
  def apply_sdp_answer(id, answer_sdp) 
  def add_ice_candidate(id, body)
  def peer_added()
  def peer_removed()
  def add_subscriber()
  def request_keyframe()
  def registry_id(id)
end
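
One possible shape for those wrappers (a sketch only; the argument lists, the message terms and the Nexus.PeerRegistry name are assumptions, not part of the proposal above):

# inside the Peer module above; Nexus.PeerRegistry is an assumed registry name
def peer_added(id, peer), do: GenServer.cast(registry_id(id), {:peer_added, peer})
def peer_removed(id, peer), do: GenServer.cast(registry_id(id), {:peer_removed, peer})
def add_subscriber(id, subscriber), do: GenServer.cast(registry_id(id), {:add_subscriber, subscriber})
def request_keyframe(id), do: GenServer.cast(registry_id(id), :request_keyframe)

def registry_id(id), do: {:via, Registry, {Nexus.PeerRegistry, id}}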

Any thoughts?

@sgfn (Member, Author) commented Jul 17, 2024

IMO all valid concerns, will include them in the next iteration

@sgfn merged commit 080f439 into master Jul 17, 2024
@sgfn deleted the sgfn/nexus branch July 17, 2024 12:52
@mickel8 mentioned this pull request Aug 1, 2024