Migrating from io-net#

The previous generation of the QNX Neutrino networking stack (io-net) was designed around a modular approach. It served its purpose by letting users separate protocols and drivers, but at the cost of significant overhead each time a packet crossed between io-net and a particular protocol's domain. The new networking stack (io-pkt) is designed to follow the NetBSD networking stack code base and architecture as closely as possible. This design provides the following benefits:

  1. Better performance
  2. Easier porting of the NetBSD stack code to QNX
  3. Updated feature set allowing better socket layer application portability
  4. Increased leverage of the NetBSD code base including drivers
  5. Far richer stack feature set drawing on the latest in improvements from the NetBSD code base
  6. 802.11 WiFi client and access point capability

Significant changes that have been implemented in io-pkt#

The io-pkt implementation made significant changes to the QNX Neutrino stack architecture. For example, checksum calculation on loopback interfaces is now controlled by sysctl variables and is disabled by default:

        # sysctl -a | grep do_loopback_cksum
        net.inet.ip.do_loopback_cksum = 0
        net.inet.tcp.do_loopback_cksum = 0
        net.inet.udp.do_loopback_cksum = 0
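
If you want checksums generated and verified on loopback traffic (for example, to exercise the checksum code paths while testing), these variables can be set at run time with sysctl -w; this is a usage sketch based on the variable names shown above:

        # sysctl -w net.inet.ip.do_loopback_cksum=1
        # sysctl -w net.inet.tcp.do_loopback_cksum=1
        # sysctl -w net.inet.udp.do_loopback_cksum=1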

Benefits of new architecture#

The new architecture provides several benefits that you should be aware of.

High-level components#

Three major software components constitute the io-pkt implementation: core components, network drivers, and applications / daemons. Each can be categorized as follows:

What components you get with io-pkt#

Core components#

Core components are software entities that are compatible only with io-pkt. They can include applications that interface directly with the stack using APIs that differ from those used by io-net.

Applications, services and libraries#

The applications category contains items that interface to the stack through the socket library and are therefore not directly dependent on the core components. This means that they use the standard BSD socket interfaces (BSD socket API, routing socket, PF_KEY, raw socket).
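
In practice, this means such an application builds the same way against either stack. For example, assuming a hypothetical source file myapp.c that uses only the BSD socket API, it is simply linked against the socket library:

        # qcc -o myapp myapp.c -lsocket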

Network Drivers#

There are three forms of drivers available with io-pkt. These are:

  1. Native io-pkt drivers
  2. Ported NetBSD drivers
  3. io-net drivers (supported through a binary-compatibility “shim” layer)

Native io-pkt drivers are distinguished from io-net drivers by a naming convention: io-net drivers are named devn-xxxxxx.so, while io-pkt native drivers are named devnp-xxxxxx.so. Native drivers are written specifically for the io-pkt stack and, as such, are fully featured, high performance, and capable of multi-threaded operation.

NetBSD drivers are not as tightly integrated into the overall stack. In the BSD operating system, these drivers operate with interrupts disabled and, as such, generally have fewer mutexing issues to deal with on the transmit and receive paths. With a “straight port” of a BSD driver, the stack defaults to a single-threaded model in order to prevent possible transmit and receive synchronization issues with simultaneous execution. If the driver has been carefully analyzed and proper synchronization techniques applied, a flag can be flipped during the driver attach to indicate that multi-threaded operation is allowed. Note that one driver operating in single-threaded mode means that all drivers operate in single-threaded mode.

The shim layer provides binary compatibility with existing io-net drivers. As such, these drivers are also not as tightly integrated into the stack. Features such as dynamically setting media options or jumbo packets, for example, aren't supported for these drivers, and, given that the driver operates within the io-net design context, such a driver will not be as performant as a native one.

In addition to the packet receive / transmit device drivers, device drivers are also available that integrate hardware crypto acceleration functionality directly into the stack.
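
The naming convention is what you work with when loading drivers. The sketch below uses i82544 and pcnet as placeholders for whatever driver matches your hardware. A native driver can be named in short form (io-pkt loads devnp-i82544.so); an io-net driver is loaded by its devn- name, with the shim layer described above providing the binary interface; and a driver can also be added to a running stack with mount:

        # io-pkt -d i82544
        # io-pkt -d devn-pcnet.so
        # mount -T io-pkt /lib/dll/devnp-i82544.so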

For a list of supported hardware, see the supported hardware list on the Networking project wiki page.

io-net and io-pkt interoperability and compatibility#

This section describes the compatibility between io-net and io-pkt. Note that io-pkt and the new utilities and daemons are not backwards compatible with io-net.

Installation compatibility#

Both io-net and io-pkt can co-exist on the same system. The updated socket library provided with io-pkt is compatible with io-net, which lets you run both stacks simultaneously. Note that the reverse is not true: if you use the io-net version of the socket library with io-pkt, unresolved symbols will occur when you attempt to use the io-pkt configuration utilities (e.g., ifconfig). The following binaries are duplicated when io-pkt is installed:

Binary compatibility#

The following replaced binaries are known to have compatibility issues with io-net; essentially, the new utilities are likely to contain enhanced features that the old stack doesn't support.

The following io-net binaries are known to have compatibility issues with io-pkt:

Socket compatibility#

Protocol compatibility#

Behavioural differences#

Simultaneous support#

Both io-net and io-pkt may be run simultaneously on the same target if the relevant utilities / daemons for each stack are available. For example, you can start io-pkt with the default prefix and io-net with a different prefix:

      io-pkt -d pcnet pci=0 
      io-net -i1 -dpcnet pci=1 -ptcpip prefix=/boo 
Note that, in this case, the io-net versions of the utilities have to be present on the target (assumed, for this example, to have been placed in a separate directory) and run against the io-net stack; the SOCK environment variable directs the socket library to the stack registered with the given prefix. E.g.:

      SOCK=/boo /io-net/ifconfig en0 192.168.1.2
      SOCK=/boo /io-net/inetd
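
By contrast, the io-pkt utilities in the default path need no SOCK setting here, since io-pkt was started with the default prefix; a minimal sketch, with the interface name and address chosen arbitrarily:

      ifconfig en0 192.168.2.2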

Functionality no longer supported#

Discontinued features#

The current version of the networking stack no longer supports the following features: