It would be interesting to know why you would choose this over something like the Contiki uIP or lwIP that everything seems to use.
RealityVoid 2 days ago [-]
Not sure if they do for _this_ package, but the Wolf* people's model is usually selling certification packages, so you can put their things in stuff that needs certifications and you offload liability. You also get the people who wrote it, and you can pay them for support. I kind of like them; I had a short project where I had to call on them to get WolfSSL working with an ATECC508 device, and the support from them was pretty good.
jpfr 2 days ago [-]
As the project is GPL'ed, I guess they sell a commercial version. GPL is toxic for embedded commercial software, but it can be good marketing for selling the commercial version.
Edit: I meant commercial license
anthonj 1 day ago [-]
In my company we used their stuff often. They have an optional commercial license for basically all their products. The price was very reasonable as well.
RealityVoid 2 days ago [-]
I think they might sell a commercial version as well. It makes sense with the GPL. But I can't really recall that well.
cpach 1 day ago [-]
“GPL is toxic for embedded commercial software”
Why is that?
bobmcnamara 1 day ago [-]
Many bare metal or RTOS systems consist of a handful of statically linked programs (one or two bootloaders and the main application), and many companies would rather find a non-GPL library than open up the rest of the system's code. Sometimes a system also contains proprietary code that may not be open sourced.
1718627440 5 hours ago [-]
In the embedded world you don't really sell software, you sell devices with firmware. Unless the library/OS is AGPL, it doesn't matter too much.
dietr1ch 1 day ago [-]
He probably meant viral, or was trying to make a deadly twist on "viral".
LoganDark 2 days ago [-]
You don't need a commercial version; many projects get away with selling just a commercial license to the same code. As long as they have the rights to relicense, this works fine.
How does it deal with all the dynamic TCP buffering, where buffers may get quite large?
Ao7bei3s 2 days ago [-]
It has a fixed maximum number of concurrent sockets, and each socket has queues backed by per-socket fixed-size transmit and receive buffers (see `rxmem` and `txmem` in `struct tsocket`[1]). This is fine, because in TCP, each side advertises remaining buffer space via the window size header field [2] (possibly with its meaning modified by the window scale option during the initial handshake - see [3] & `struct PACKED tcp_opt_ws`), and possibly also how much it can maximally receive in one packet (via the MSS option on the initial handshake [4]; possibly modified by intermediary systems via MSS clamping). wolfip has unusually small buffer sizes, and hardcodes them via #define, and everything else (e.g. congestion control) is pretty rudimentary too, but otherwise it's pretty much the same as in a "normal" implementation.
The required buffer size is the product of bandwidth and delay, so when communicating with something close by, it can still go fast.
I had an illustration of this once when my then-employer's IT dept set up the desktop IP phones to update from a TFTP server on the other continental land mass. Since TFTP only allows one outstanding packet, everyone outside head office had to wait a long time for their phone to update, while head office didn't see any issue.
If it's a memory-constrained embedded device, you're sending a handful of bytes of telemetry every now and then, not aiming for gigabit speeds. In fact, it couldn't reach megabit speeds even if it wanted to. For something like that, this is perfect.
CyberDildonics 2 days ago [-]
Are there TCP/IP stacks out there in common use that are allocating memory all the time?
fulafel 2 days ago [-]
Yes, TCP is pretty hungry for buffers. The bandwidth*delay product can eat gigabytes of memory on a server. You have to be ready to retransmit anything that's in flight, i.e. anything you haven't received the ack for yet.
nly 1 day ago [-]
The bandwidth-delay product for a 10 Gbps stream at a 300 ms RTT theoretically only requires ~375 MB.
One option is just to keep buffers small and fixed, and disconnect blocked clients on write() after some timeout.
fulafel 22 hours ago [-]
We're up to hundreds of Gbps per server, and have been for some years now. E.g. 400 Gbps uses a lot of memory even with a much smaller average RTT. That's not going to be one stream, of course, but a zillion smaller streams still add up to the same requirements.
This is far from little-embedded-device territory, of course. But still, the latest WiFi is already closer to 10 Gbps than to 1.
Veserv 21 hours ago [-]
I do not understand the point you are trying to make. The person you replied to showed how to evaluate it with simple math.
400 Gb/s is 50 GB/s. RTT of 300 ms would only require 15 GB of buffers. That would not even run a regular old laptop out of memory let alone a server driving 400 Gb/s of traffic. That would be single-digit percents to possibly even sub-percent amounts of memory on such a server.
fulafel 14 hours ago [-]
I introduced the concept of bandwidth * delay product to the conversation...
The question was about why use dynamic allocation. In this branch of the thread we were discussing the question "Are there TCP/IP stacks out there in common use that are allocating memory all the time?"
We'd not be happy to see the server or laptop statically reserving this worst-case amount of memory for TCP buffers when it's not in fact slinging around the maximum number of TCP connections, each with a worst-case bandwidth*delay product. Nor would we be happy if the laptop or server only supported little TCP windows that limit performance by capping the amount of data in flight to a low number.
We are happier if the TCP stack dynamically allocates the memory as needed, just like we're happier with dynamic allocation in most other OS functions.
CyberDildonics 1 day ago [-]
Needing memory doesn't have to mean allocating memory over and over. Memory allocation is expensive; if someone is doing that, reusing memory is going to be by far the best optimization.
fulafel 22 hours ago [-]
Well, allocating and freeing according to need is reusing. Modern TCP performance is not bottlenecked by that. There are pools of recycled buffers that grow and shrink according to load, etc.
CyberDildonics 4 hours ago [-]
> Well, allocating and freeing according to need is reusing
That's a twisted definition. It seems like you're playing around with terms, but allocating memory from a heap allocator is obviously what people mean by "dynamic memory allocation". Reusing memory that has already been grabbed from an allocator is not reallocating memory. If you have a buffer and it works, you don't need to do anything to reuse it.
> Modern TCP perf is not bottlenecked by that. There's pools of recycled buffers that grow and shrink according to load etc.
If anything is allocating memory from the heap in a hot loop it will be a bottleneck.
Reusing buffers is not allocating memory dynamically.
wmf 2 days ago [-]
Packets and sockets have to be stored in memory somehow. If you have a fixed pool that you reuse, it's basically a slab allocator.
CyberDildonics 2 days ago [-]
You need some memory, but that doesn't mean you would constantly allocate memory. There is a big difference between a few allocations and allocating in a hot loop.
bobmcnamara 1 day ago [-]
Yes, it is pretty common.
However, sometimes the buffers are pooled, so buffer-allocator contention only occurs within the network stack or within a particular NIC.
sedatk 2 days ago [-]
It only implements IPv4, which explains to a degree why IPv6 isn't ubiquitous: it's costly to implement.
hrmtst93837 1 days ago [-]
If you want IPv6 without dynamic allocation you end up rewriting half the stack anyway, so it's probably not what most embedded engineers are itching to spend budget on. The weird part is that a lot of edge gear will be stuck in legacy-v4 limbo just because nobody wants to own that porting slog, which means "ubiquitous IPv6" will keep being a conference slide more than a reality.
notepad0x90 2 days ago [-]
It's just not worth it. The only thing keeping it alive is people being overly zealous about it. If the cost to implement is measured as '1', the cost to administer it is like '50'.
sedatk 2 days ago [-]
> the only thing keeping it alive is people being overly zealous over it
Hard disagree. It turned out to be great for mobile connectivity and IoT (Matter + Thread).
> the cost to administer it is like '50'.
I'm not sure if that's true. It feels like less work to me because you don't need to worry about NAT or DHCP as much as you do with IPv4.
notepad0x90 20 hours ago [-]
To start with, it requires supporting v4 as a separate network, at least for internal networks, since many devices don't support IPv6 (I have several APs, IoT devices, etc. bought in recent years that are like that). Then the v4->v6 NAT/gateway/proxy approaches don't work well for cases where reliability and performance are critical.

You mentioned NAT, but lack of NAT means you have to configure firewall rules. Many people get a public IP from their ISP assigned directly to the first device that connects to the ISP modem, exposing their device directly to the internet. Others need to expose a LAN service on devices (port forwarding), which is more painful with v6.

DHCP works very simply. v6 addressing can be made simple too (especially with the v4-patterned addressing - forgot its name), but you have multiple types of v6 addresses, and the only way to easily access resources with v6 is to use host names. With v4 you can just type an IP easily and access a resource. Same with troubleshooting: it's more painful because it is more complex, and it requires more learning by users. And if you have dual stack, that doesn't add to the management/admin burden, it multiplies it. It's easier to tcpdump and troubleshoot ARP, DHCP, and routing with v4 than it is ND, RA, anycast, link-local, etc. with v6.
For mobile connectivity, IPv4 works smoothly as well in my experience, but I don't know your use case well enough to form an opinion. I don't doubt IPv6 makes some things much easier to solve than IPv4. I am also not dismissing IPv6 as a pointless protocol; it does indeed solve lots of problems, but the problems it solves are largely for network administrators. Even then, you won't find a private network in a cloud provider running v6, for good reason too.
nicman23 2 days ago [-]
What? Have you seen IPv4 block pricing?
notepad0x90 1 day ago [-]
More workarounds keep arising, and public IP usage hasn't been increasing as it did in past decades either. Most new device growth is on mobile, where CGNAT works OK.
toast0 2 days ago [-]
Eh. IPv6 is probably cheaper to run compared to running large scale CGNAT. It's well deployed in mobile and in areas without a lot of legacy IPv4 assignments. Most of the high traffic content networks support it, so if you're an eyeball network, you can shift costs away from CGNAT to IPv6. You still have to do both though.
Is it my favorite? No. Is it well supported? Not everywhere. Is it going to win, eventually? Probably, but maybe IPv8 will happen, in which case maybe they'll learn from this and it takes 10 years to reach 50% of traffic instead of 30.
notepad0x90 1 day ago [-]
It depends on who you're talking about, but no disagreement on cost for ISPs. For end users (including CSPs) it's another story.
Even on its own it's hard to support, and most people have to maintain a dual stack; v4 isn't going away entirely any time soon.
gnerd00 2 days ago [-]
My 15-year-old MacBook does IPv6 and IPv4 effortlessly.
notepad0x90 1 day ago [-]
That's great, but when you have a networking issue, you have to deal with two stacks for troubleshooting. It would be much less effort to use just IPv4.
You're not paying for IPv4 addresses, I'm sure, so did IPv6 solve anything for you? This is what I meant by zealots keeping it alive: you use IPv6 for the principle of it, but tech is supposed to solve problems, not facilitate ideologies.
preisschild 22 hours ago [-]
> it would be much less effort to use just ipv4.
Or just use IPv6-only. That's what I do.
Legacy IPv4-only services can be reached via DNS64/NAT64.
notepad0x90 20 hours ago [-]
But that's slow, and it's one more thing you have to set up that could fail. What is the benefit to me of using IPv6 plus those NAT services? What if I run into a service that blocks those NAT IPs because they generate lots of noise/spam, since they let anyone proxy through the same IP? Not only does it not benefit me, but if this were commercial activity I was engaging in, it could lead to serious loss of money.
At the risk of more downvotes, I again ask: why? Am I supposed to endure all this trouble so that IPv4 is cheaper for some corporation? Even then, we've hit a plateau as far as end-user adoption goes. And I'll continue to argue that using IPv6 is a serious security risk if you just flip it on and forget about it; you have to actually learn how it works and secure it properly. These are precious minutes of people's lives we're talking about, for the sake of some techno-ideology. Billions and billions have been spent on IPv4, and no one in 2026 is claiming an IPv4 shortage will cause outages anytime within the next decade or two.
My suggestion is to come up with a solution that doesn't require any changes to the IP stack or layer 3 by end users. CGNAT is one approach, but there are spare fields in the IPv4 header that could be used to indicate some other address extension to IPv4 (not an entire freaking replacement of the stack), or just a minor addition/octet that would solve the problem for the next century or so by adding an "area code"-like value (ASN?).
preisschild 23 hours ago [-]
Matter (a smart home connectivity standard in use by many embedded devices) is using IPv6. It doesn't seem to be a problem there.
[1] https://github.com/wolfSSL/wolfip/blob/60444d869e8f451aa2dca... [2] https://github.com/wolfSSL/wolfip/blob/60444d869e8f451aa2dca... [3] https://github.com/wolfSSL/wolfip/blob/60444d869e8f451aa2dca... [4] https://github.com/wolfSSL/wolfip/blob/60444d869e8f451aa2dca...