I have a network stack #1
Conversation
Awesome! Yeah, I'd welcome contributions, especially for a networking stack. A pull request is fine too. As you probably noticed, I haven't used many of GitHub's features yet -- my bad that you had to go this indirect route to contact me. Also, I think my email should be in the commit log if you cloned the repo -- you can use that to contact me too.
Great! Now that I made this pull request, the issues tab appears, so perhaps it didn't show up simply because there were no issues yet, not because it was private.
Okay, I also enabled issue tracking for the repo. Before you start, it would make sense to talk about how you're planning to interface with the rest of the kernel (i.e., the specifics of the dependencies you'll have and of the interface you're planning to expose). Feel free to ping me here or by email when you're ready.
Also: was the pull request something you wanted to merge in (it's fine if it's something you'd potentially like to eventually work on), or was it just a way to get in contact?
Sure. So right now, I have it broken up into MANY (Cargo) libraries. Off the top of my head there are:
Notably, outside of timeouts for TCP and RIP, none of this code spawns any threads -- everything is callback-based (with unboxed closures). I did this for a number of reasons. First, I felt that this was the abstraction upon which everything else could best be built: spawning a thread from a callback is nearly free, while spinning up a thread just to run a callback isn't. As proof, the part of TCP that currently works has both an async API and a capabilities/sockets/however-you-want-to-view-it synchronous API on top. Second, this means it should better mirror the workings of a network driver.

Truth be told, other than tinkering a bit with https://github.com/pczarn/rustboot, I have no kernelspace experience. I read a bunch on DMA drivers and whatnot, and it seemed that the card interrupting the kernel, which would in turn pass off the packet to a callback registered via the network-layer interface, would be a simple design to implement.

The code as written uses mostly libcollections and below, with the exception that the callbacks are stored in very global hashmaps riddled with locks. It seemed a nasty tradeoff that by forgoing, let's say, a thread per IP protocol or TCP connection, I had to maintain this global state -- in essence, avoiding scheduling led me to write my own bare-bones cooperative scheduler.

Here would be my plan for integrating it: I like that my net stack is not only modular, but that its modularity is enforced with many Cargo packages. I wonder if it might be possible to get RustOS built with Cargo now that there are build scripts and the flexible target specification. I imagine/hope that eventually the standard library and rustc will be built with Cargo, so that whatever could be hacked together now will get better with time. After RustOS is built with Cargo, I'd need it to use a more recent version of Rust, as my code makes heavy use of unboxed closures.
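A rough sketch of the callback-based dispatch described above -- all names here (`Registry`, `register`, `dispatch`) are illustrative, not the actual API, and `std` types stand in for whatever the kernel would provide. Protocol handlers are boxed closures kept in a locked hashmap, and the driver's receive path hands each packet to whichever handler is registered for its protocol number:

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// A handler is just a closure taking the raw packet bytes.
type Handler = Box<dyn Fn(&[u8]) + Send>;

// Hypothetical registry: protocol number -> handler, behind a lock
// (mirroring the "global hashmaps riddled with locks" above).
struct Registry {
    handlers: Mutex<HashMap<u8, Handler>>,
}

impl Registry {
    fn new() -> Self {
        Registry { handlers: Mutex::new(HashMap::new()) }
    }

    // Register a handler for an IP protocol number (e.g. 17 = UDP).
    fn register(&self, proto: u8, h: Handler) {
        self.handlers.lock().unwrap().insert(proto, h);
    }

    // Called from the driver's receive path: dispatch to the handler,
    // returning whether anyone was registered for this protocol.
    fn dispatch(&self, proto: u8, packet: &[u8]) -> bool {
        match self.handlers.lock().unwrap().get(&proto) {
            Some(h) => { h(packet); true }
            None => false, // no handler registered: drop the packet
        }
    }
}
```

In the single-threaded case discussed later in the thread, the `Mutex` could become a `Cell`-style wrapper at no cost.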
The current work removing librustrt and consolidating the platform-dependent parts of libstd should help here. Then, my net stack can be added as a Cargo dependency. (I'm a bit confused on what parts of that are done so far.) Finally, once everything can be built together (dependencies are met, etc.), it should be very easy to actually hook it all up.

Future work (not really specific to integrating with RustOS) would be making a network-layer trait analogous to my link-layer trait for IPv4 and IPv6. With associated types, I think the traits could be sub-traits of one trait with associated types for packets, addresses, etc. And making UDP. Also, once the consolidation of the OS-specific parts of libstd is complete, implementing the synchronous sockets part of libstd should be totally doable.
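The "one trait with associated types" idea from the future-work paragraph might look roughly like this -- a sketch under assumed names (`NetworkLayer` and both impls are mine, not from either codebase), with per-protocol address and packet types becoming associated types:

```rust
use std::net::{Ipv4Addr, Ipv6Addr};

// Hypothetical unified network-layer trait: the per-protocol details
// (address and packet types) are associated types, so IPv4 and IPv6
// become two implementations of one interface.
trait NetworkLayer {
    type Addr;
    type Packet;

    // Hand a packet to this layer for transmission toward `dst`.
    fn send(&mut self, dst: Self::Addr, packet: Self::Packet);
}

struct Ipv4Layer { sent: usize }
struct Ipv6Layer { sent: usize }

impl NetworkLayer for Ipv4Layer {
    type Addr = Ipv4Addr;
    type Packet = Vec<u8>;
    fn send(&mut self, _dst: Ipv4Addr, _packet: Vec<u8>) { self.sent += 1; }
}

impl NetworkLayer for Ipv6Layer {
    type Addr = Ipv6Addr;
    type Packet = Vec<u8>;
    fn send(&mut self, _dst: Ipv6Addr, _packet: Vec<u8>) { self.sent += 1; }
}
```

Code generic over `NetworkLayer` (TCP, UDP) would then not care which IP version sits underneath.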
Haha, I figure running Servo is a fine long-term goal, but one that's very far off and that I (or anybody else) couldn't work on anytime soon. Do what you want with it. :D
Cool re the features and implementation. Regarding Cargo, I did some investigation a while ago and found it too hard to do the custom Rust lib build that I'm currently doing. There are also assembly files and custom linker scripts that need special handling. This could be something to look into. A newer Rust version should be fine. For dependencies: it seems like your changes will require threading support, which I'm planning to work on next. Good to see that OS-dependent parts are being consolidated -- could you point me to somewhere that describes the plan in more detail? So, if you limited yourself to
Regarding using the crates "behind the facade", and not
Yeah, using Cargo would definitely be cumbersome in the short term. I'd hope the devs would agree this sort of use case ought to be accounted for, so we'd be in a good position to make feature requests.
I checked, and right now they actually temporarily got rid of
I actually think I won't require threading support at all. Some sort of Cell should suffice instead of a lock in the single-threaded case. Long term, it might make the most sense to have the network stack and scheduler work in tandem -- e.g. the scheduler could implement the various traits meant to store the callbacks for protocol handlers and connections. That said, I won't have time to do much until school is out in a month, so you could have threading ready by the time I am.
https://github.com/rust-lang/rfcs/blob/master/text/0230-remove-runtime.md lays out the plan for consolidating system specific stuff. The thing to do seems to be to implement
Yeah, with the locks factored out into traits + impl with Cells, the
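One way to read "locks factored out into traits + impl with Cells" -- a sketch under assumed names (`Shared` and its `with` method are mine): the stack becomes generic over a small guard trait, with a `Mutex` impl for the threaded case and a `RefCell` impl for the single-threaded case discussed above:

```rust
use std::cell::RefCell;
use std::sync::Mutex;

// Hypothetical abstraction over "how shared state is guarded":
// the net stack would be generic over S: Shared<T> instead of
// hard-coding Mutex everywhere.
trait Shared<T> {
    fn with<R>(&self, f: impl FnOnce(&mut T) -> R) -> R;
}

// Multi-threaded impl: a real lock.
impl<T> Shared<T> for Mutex<T> {
    fn with<R>(&self, f: impl FnOnce(&mut T) -> R) -> R {
        f(&mut self.lock().unwrap())
    }
}

// Single-threaded impl: RefCell, no locking cost.
impl<T> Shared<T> for RefCell<T> {
    fn with<R>(&self, f: impl FnOnce(&mut T) -> R) -> R {
        f(&mut self.borrow_mut())
    }
}
```

A scheduler could likewise provide its own impl, which is the "work in tandem" idea from the previous comment.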
Cool! I won't have much free time for a couple of weeks though -- I'll be able to do a deeper inspection and get to threading after that (and maybe wait for the upstream Rust lib refactor to be complete too).
Sounds like we will both have more free time around the same time -- should work out nicely.
(force-pushed 713d574 to 8da6ec1)
Ok, I'm ready to work on this a lot! Great job starting to integrate Cargo. I figured with that in place, it might make sense to separate the source from the bundled deps, and also get lazy-static from GitHub via git. (That is what is now in those three commits on my branch -- but I need to test, so don't merge yet.) What do you think?
Cool! I still need to make a more thorough review of your net stack (should be able to get to it in the next few days). My preliminary thoughts on interfacing with a network card:
The interface on the other side is complicated by demultiplexing with UDP/TCP ports. I'll look more into your code here. Also, I fixed up your last commit in the pull request (the Makefile's Rust path needed to change, and I squashed the last 2 commits into 1), and I've put it in master.
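The port demultiplexing mentioned here is essentially a second lookup after the protocol-level dispatch. A toy sketch (types and names are mine, not from either codebase): parse the 8-byte UDP header, then dispatch on the destination port:

```rust
use std::collections::HashMap;

// Hypothetical UDP demux table: local port -> handler for the payload.
struct UdpDemux {
    by_port: HashMap<u16, Box<dyn Fn(&[u8])>>,
}

impl UdpDemux {
    fn new() -> Self {
        UdpDemux { by_port: HashMap::new() }
    }

    // Bind a handler to a local port (like a listening socket).
    fn bind(&mut self, port: u16, h: Box<dyn Fn(&[u8])>) {
        self.by_port.insert(port, h);
    }

    // Parse the 8-byte UDP header and dispatch on destination port.
    fn deliver(&self, datagram: &[u8]) -> bool {
        if datagram.len() < 8 {
            return false; // too short to carry a UDP header
        }
        let dst_port = u16::from_be_bytes([datagram[2], datagram[3]]);
        match self.by_port.get(&dst_port) {
            Some(h) => { h(&datagram[8..]); true }
            // No socket bound: a full stack would send ICMP port unreachable.
            None => false,
        }
    }
}
```

TCP is the same idea keyed on the full (src addr, src port, dst addr, dst port) tuple rather than a single port.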
Excellent, I'll update my repo. Do you go on the Rust IRC (or another)? Might be easier to make plans in real time.
Yeah, that works -- I'm using the nick
Hmm, for some reason I can't build it again. Even old commits.
Nevermind, I had the wrong version of Rust checked out.
FYI, I started porting to Rust master, if you want to work on that at all.
Is there a new feature that you need from master? (Because waiting for the next official release would have a few benefits.)
Well, unboxed closures and associated types (assuming the latter is in decent shape) are generally nice to have. Also, between both of us wanting to not use libstd (since we are implementing it) and the changes to liballoc, I believe we wouldn't need to patch Rust at all.
The 1.0 alpha is supposed to be January 9. So on one hand, that's coming very soon; on the other, there should be less churn after.
Jan 9 is pretty close, so I would do an upgrade before then only if it's blocking something now.
I guess think of it as an upgrade in anticipation of Jan 9 :). E.g., it would be useful to make sure vanilla liballoc really works with those PRs merged.
Scheduler calls thread function or pop_registers_and_iret
Please excuse the frivolity of this pull request. Without a public issue tracker, this was the only way I could send you a message.
I have written (for a class) a network stack in Rust, with the intention of adding it to an exokernel in Rust someday. You have the most actively-developed exokernel and, if the commit history is any indication, are currently adding networking support to it.
I would need to get permission from the professor and my assignment partner to open-source it, as it is a school assignment. But other than that, I could start working on integrating the two in about a month, as that is when school gets out.