Feed for discussion Networking Drivers in project Networking. http://community.qnx.com/sf/discussion/do/listTopics/projects.networking/discussion.drivers Posts for Networking Drivers post121476: Support for TI18XX devnp-ti18xx_imx6x in QNX 7.1 http://community.qnx.com/sf/go/post121476 Hi all, recently I've updated an i.MX6-based system from QNX 7.0 to QNX 7.1. We are using a board with a Wi-Fi/BT chip from TI that is supported by the devnp-ti18xx_imx6x.so driver. This driver is present in QNX 7.0 but not in QNX 7.1. The TI utilities are also missing: wpa_passphrase_ti18xx, wpa_supplicant_ti18xx and tiwlan_cfg_ti18xx. Are these tools still available in QNX 7.1? Thanks, Mario Thu, 01 Jul 2021 12:50:36 GMT http://community.qnx.com/sf/go/post121476 mario sangalli 2021-07-01T12:50:36Z post121475: Re: Networking Response and pci-bios Image http://community.qnx.com/sf/go/post121475 Hi Aaron, I concur with Nick's assessment -- it's probably an incorrect IRQ assignment or an unfriendly shared IRQ. Given that you're reporting that it doesn't work with apic either, that's some evidence pointing to the idea that the operating system isn't getting the correct IRQ. I've had this happen when there were PCI-to-PCI bridges in the communications chain. You can use the output of pci -vv to determine if your card is behind such a bridge: there would be an entry "Class = Bridge (PCI/PCI)", and the "Secondary bus number" for the bridge entry would match the "Bus number" for your network controller. Here are some things you could try: - Manually override the interrupt assignment when launching the network driver with the 'irq=x' argument. It's lame, but you could just try all possible values (1-15) to see if there's one where you consistently get back your 0.3 ms ping. 
- Another way to find the correct IRQ: if the BIOS doesn't print out a page with the IRQ assignments, you could try to find out what the BIOS thought the IRQ should be by writing a little test program that manually invokes a PCI BIOS call to read the IRQ register on the card, and then run this program in your boot script before launching pci-bios. - Alternately, I can pass along a patch I wrote for pci-bios-v2 from the 6.6 x86 BSP that adds the ability to map IRQs properly through PCI-to-PCI bridges. If that is your problem, this patch might fix it for you. If the problem is an unfriendly shared IRQ, you can use 'pidin irqs' to find out what other drivers are active on your IRQ line, and then try shutting them off one by one until you find the culprit. You can sometimes influence IRQ sharing assignments by enabling or disabling various hardware in the BIOS, such as PS/2 mouse support or unused serial or ethernet ports. Hope this is of some help, -Will Tue, 29 Jun 2021 19:32:45 GMT http://community.qnx.com/sf/go/post121475 Will Miles 2021-06-29T19:32:45Z post121474: Re: Networking Response and pci-bios Image http://community.qnx.com/sf/go/post121474 > Given that it works well when using startup-apic and pci-bios-v2, I think your > problem is with the interrupts rather than the network driver itself. > > I suspect the wrong interrupt is being identified for the network card and you > are being saved by something else triggering the network driver code - > probably one of the other devices that is sharing that interrupt. > > Alternatively with shared interrupts, one of the other device drivers that is > sharing that interrupt isn't handling things properly - possibly masking and > then taking a long time to determine that the interrupt isn't for it before > unmasking so that the network driver can see it. > > Regards, > Nick Thanks for the info. I suppose if it is an interrupt issue I don't have much, if any, control over how they are set up or handled. 
That would require BIOS code changes? Tue, 29 Jun 2021 17:44:58 GMT http://community.qnx.com/sf/go/post121474 Aaron Candy 2021-06-29T17:44:58Z post121473: Re: Networking Response and pci-bios Image http://community.qnx.com/sf/go/post121473 Given that it works well when using startup-apic and pci-bios-v2, I think your problem is with the interrupts rather than the network driver itself. I suspect the wrong interrupt is being identified for the network card and you are being saved by something else triggering the network driver code - probably one of the other devices that is sharing that interrupt. Alternatively with shared interrupts, one of the other device drivers that is sharing that interrupt isn't handling things properly - possibly masking and then taking a long time to determine that the interrupt isn't for it before unmasking so that the network driver can see it. Regards, Nick Tue, 29 Jun 2021 17:12:26 GMT http://community.qnx.com/sf/go/post121473 Nick Reilly 2021-06-29T17:12:26Z post121472: Networking Response and pci-bios Image http://community.qnx.com/sf/go/post121472 We've created a QNX 6.6 image to use with an Axiomtek MANO500 board. Despite a BSP existing for the MANO500 board, we've had better success with the generic QNX 6.6 BSP. I've taken over this project from the original developer and understand enough to make things work, but I still wouldn't consider myself an expert, so I appreciate your patience. I'm having problems getting the network response to the same level as other machines on the network. On a local network the ping response times to the QNX 6.6 machine are in the range of 5 - 45 ms. Pinging a QNX 6.3 machine, or any other device on the network, the response time is less than 0.3 ms. The slow response seen in ping is also evident in other network applications on QNX 6.6; ping is just an easy way of demonstrating the issue. 
The QNX 6.6 image with the slow response time is being created using startup-bios and pci-bios, using the devnp-e1000 driver. If I use startup-apic and pci-bios-v2 the ping response time matches what is seen with all the other devices on the network, less than 0.3 ms. However, the computer must use a particular PCI card which doesn't work properly when using startup-apic and pci-bios-v2, so we are forced to use startup-bios and pci-bios. I also notice that when using startup-bios and pci-bios all the interrupts for the (PCI) devices are shared, which is not the case for startup-apic and pci-bios-v2. I have a feeling that the slow response time might be related to the shared interrupts, but I'm not knowledgeable enough to know if that is true, or what to do about it. I'm not sure what other relevant information I can add. So, is there anything I can do or try to get better performance from the networking on QNX 6.6? Or am I just out of luck? Thanks Tue, 29 Jun 2021 17:01:34 GMT http://community.qnx.com/sf/go/post121472 Aaron Candy 2021-06-29T17:01:34Z post121443: QNX 6.6.0 Network Driver for Intel Kaby Lake chipset (i219LM) http://community.qnx.com/sf/go/post121443 Is a network driver for the Intel i219LM chip available for QNX 6.6.0+SP1, or is one planned? Tue, 01 Jun 2021 23:04:42 GMT http://community.qnx.com/sf/go/post121443 Janusz Ruszel 2021-06-01T23:04:42Z post121417: Retrieving network statistics similar to nicinfo in C++ http://community.qnx.com/sf/go/post121417 I was looking for a way to query all the info available from the nicinfo utility directly in C++. Currently I am using popen() to parse the text output of nicinfo. I was hoping there was an alternative way to do this. I see that a nic_stats_t structure is available ("qnx.com/developers/docs/6.3.2/ddk_en/network/nic_stats_t.html"). This suggests that I would use devctl() with DCMD_IO_NET_GET_STATS to retrieve this information; however, this leaves me with a few more questions: 1. 
What file descriptor do I pass to devctl()? I do not see my network devices in the "/dev/..." directory. 2. What includes do I need to achieve the mentioned functionality? 3. Are there any C++ examples I could look at to help me along? I tried to find the nicinfo source using the svn checkout method, however I do not seem to have access to that. Thanks Mon, 17 May 2021 12:42:47 GMT http://community.qnx.com/sf/go/post121417 Narshil Vaghjiano 2021-05-17T12:42:47Z post121377: tcpdump http://community.qnx.com/sf/go/post121377 How do I run tcpdump on QNX? I am unable to run it directly. Tue, 13 Apr 2021 03:50:19 GMT http://community.qnx.com/sf/go/post121377 Chitra Prabhu 2021-04-13T03:50:19Z post121376: Re: promiscuous mode QNX 6.4 http://community.qnx.com/sf/go/post121376 Found it ... able to set it when I listen on an interface using libpcap Tue, 13 Apr 2021 03:48:32 GMT http://community.qnx.com/sf/go/post121376 Chitra Prabhu 2021-04-13T03:48:32Z post121369: Re: promiscuous mode QNX 6.4 http://community.qnx.com/sf/go/post121369 Which network driver are you running? You can try adding "promiscuous" to the driver command line when you start the driver. On 2021-04-06, 8:27 AM, "Chitra Prabhu" <community-noreply@qnx.com> wrote: promiscuous mode QNX 6.4 -> How do I enable promiscuous mode? en0 promiscuous mode is OFF. I tried ifconfig but it does not work. What is a good way to do this? _______________________________________________ Networking Drivers http://community.qnx.com/sf/go/post121368 To cancel your subscription to this discussion, please e-mail drivers-networking-unsubscribe@community.qnx.com Tue, 06 Apr 2021 12:35:11 GMT http://community.qnx.com/sf/go/post121369 Hugh Brown 2021-04-06T12:35:11Z post121368: promiscuous mode QNX 6.4 http://community.qnx.com/sf/go/post121368 promiscuous mode QNX 6.4 -> How do I enable promiscuous mode? en0 promiscuous mode is OFF. I tried ifconfig but it does not work. What is a good way to do this? 
Tue, 06 Apr 2021 12:27:35 GMT http://community.qnx.com/sf/go/post121368 Chitra Prabhu 2021-04-06T12:27:35Z post121336: WiFi driver template http://community.qnx.com/sf/go/post121336 Hi, I'm searching for a Wi-Fi driver running on i.MX6 / QNX 6.6. It seems quite impossible to get any support from QNX, even in binary form. I've used the only driver available from QNX, for the TI WiLink 8 WL1835 (SDIO), and it works OK, but we need to test some other modules based on Broadcom chips. If a source template for a WORKING Wi-Fi driver is available, maybe I can try to port the Linux one, which is available in source form: I do not want to start from scratch if I can avoid it. Thanks Mario Fri, 19 Mar 2021 11:02:25 GMT http://community.qnx.com/sf/go/post121336 mario sangalli 2021-03-19T11:02:25Z post121146: QNX 7.0 - devnp-e1000.so driver displayed error, using Intel 82574L Gbit Ethernet card. http://community.qnx.com/sf/go/post121146 Hi, the devnp-e1000.so driver displayed an error message when the Intel 82574L Gigabit Ethernet controller was loaded by it. The system is: CPU: Intel Core i7 Ethernet: Intel 82574L Gigabit Ethernet Controller QNX version: QNX 7.0 Driver: devnp-e1000.so Error message: # io-pkt-v4-hc -i2 -ptcpip prefix=/alt2 # mount -T io-pkt2 -o did=0x10d3,pci=21 /lib/dll/devnp-e1000.so mount: Can't mount / (type io-pkt2) mount: Possible reason: No such device # I have searched for information on Foundry27, but I could not find an answer. I hope someone can suggest a solution. Wed, 09 Dec 2020 08:40:15 GMT http://community.qnx.com/sf/go/post121146 Jeongsik Im 2020-12-09T08:40:15Z post120764: Re: Ethernet errors detection http://community.qnx.com/sf/go/post120764 See net/ifdrvcom.h and hw/nicinfo.h. You will need to look at the valid_stats field in nic_ethernet_stats_t to determine which stats the driver supports. If that doesn't have everything you need then yes, you will need to contact the driver vendor to add support. 
Thu, 11 Jun 2020 16:45:06 GMT http://community.qnx.com/sf/go/post120764 Nick Reilly 2020-06-11T16:45:06Z post120758: Re: Ethernet errors detection http://community.qnx.com/sf/go/post120758 Thanks a lot for the response! > Take a look at the output of the "nicinfo" command, this reports the Ethernet > stats. Note that this is entirely driver dependent - different drivers report > different stats. > > The code for this is: > struct drvcom_stats dstats; > nic_stats_t *statp; > > statp = &dstats.dcom_stats; > > ifdcp->ifdc_cmd = DRVCOM_STATS; > ifdcp->ifdc_len = sizeof(*statp); > if ((ret = devctl(s, SIOCGDRVCOM, ifdcp, > sizeof(struct drvcom_stats), NULL)) == EOK) { > dump_stats(s, statp); > } > Could you tell me which header files I need for this code? Also, as I understand it, do we need to contact the driver supplier in order to extend the information about the Ethernet interface? Thu, 11 Jun 2020 10:00:52 GMT http://community.qnx.com/sf/go/post120758 Furkat Mallabaev 2020-06-11T10:00:52Z post120748: DF bit and UDP checksum validation per socket - IPv4 http://community.qnx.com/sf/go/post120748 Hello, I have two questions: 1) Is there any way to set/unset the IP Don't Fragment bit (DF bit) per socket (e.g. using setsockopt()) in QNX 7.0.0? In Linux it is possible via setsockopt() with the settings (IPPROTO_IP, IP_MTU_DISCOVER) and the values IP_PMTUDISC_DO / IP_PMTUDISC_DONT (sets/unsets the DF bit). 2) Is there any way to enable/disable transmission of the UDP checksum per socket in QNX 7.0.0? In Linux it is possible via setsockopt() with the settings (SOL_SOCKET, SO_NO_CHECK) and the values 1/0 (disables/enables the checksum). So far I have not found any alternatives in the QNX system for the above Linux solution. Thank you very much for the help. 
Tue, 09 Jun 2020 13:56:36 GMT http://community.qnx.com/sf/go/post120748 Jaroslaw Kuraszewicz 2020-06-09T13:56:36Z post120728: Re: Ethernet errors detection http://community.qnx.com/sf/go/post120728 Take a look at the output of the "nicinfo" command; this reports the Ethernet stats. Note that this is entirely driver dependent - different drivers report different stats. The code for this is:

struct drvcom_stats dstats;
nic_stats_t *statp;

statp = &dstats.dcom_stats;

ifdcp->ifdc_cmd = DRVCOM_STATS;
ifdcp->ifdc_len = sizeof(*statp);
if ((ret = devctl(s, SIOCGDRVCOM, ifdcp,
    sizeof(struct drvcom_stats), NULL)) == EOK) {
    dump_stats(s, statp);
}

Link state is a separate API:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <sys/ioctl.h>
#include <netinet/in.h>
#include <net/if.h>

int main(int argc, char **argv)
{
    int s;
    struct ifdatareq ifdata;

    if (argc != 2) {
        fprintf(stderr, "Incorrect usage\n");
        return 1;
    }
    s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    if (s == -1 && errno != EPROTONOSUPPORT) {
        fprintf(stderr, "can't open IPv4 socket %s", strerror(errno));
        return 1;
    }
    memset(&ifdata, 0, sizeof(ifdata));
    strlcpy(ifdata.ifdr_name, argv[1], IF_NAMESIZE);
    if (ioctl(s, SIOCGIFDATA, &ifdata) == -1) {
        fprintf(stderr, "ioctl failed: %s\n", strerror(errno));
        exit(1);
    }
    fprintf(stdout, "Linkstate %d\n", ifdata.ifdr_data.ifi_link_state);
    return 0;
}

Thu, 04 Jun 2020 13:14:31 GMT http://community.qnx.com/sf/go/post120728 Nick Reilly 2020-06-04T13:14:31Z post120727: Ethernet errors detection http://community.qnx.com/sf/go/post120727 Hello, I need a way to detect any Ethernet errors (e.g. CRC check failed, cable unplugged, etc.). Is there any API to obtain these errors programmatically? I tried to investigate io-pkt, but it only allows writing and launching one's own drivers, and I couldn't find an API to check the health status of those drivers. Thank you. 
Thu, 04 Jun 2020 09:45:41 GMT http://community.qnx.com/sf/go/post120727 Furkat Mallabaev 2020-06-04T09:45:41Z post120458: Re: devn-e1000.so or devn-82579LM.so for QNX 6.3.2 http://community.qnx.com/sf/go/post120458 You will have to request a special build of this driver through your local QNX contact, as this is not something that we have readily available. On 2020-04-15, 8:35 AM, "Murad Sultanzadeh" <community-noreply@qnx.com> wrote: I need the devn-e1000.so driver for QNX 6.3.2 (or devn-82579LM.so). Chip names: Intel I-210IT and Intel 82579LM. Wed, 15 Apr 2020 12:39:56 GMT http://community.qnx.com/sf/go/post120458 Hugh Brown 2020-04-15T12:39:56Z post120457: devn-e1000.so or devn-82579LM.so for QNX 6.3.2 http://community.qnx.com/sf/go/post120457 I need the devn-e1000.so driver for QNX 6.3.2 (or devn-82579LM.so). Chip names: Intel I-210IT and Intel 82579LM. Wed, 15 Apr 2020 12:35:26 GMT http://community.qnx.com/sf/go/post120457 Murad Sultanzadeh(deleted) 2020-04-15T12:35:26Z post120384: Change packet size in the pfil hook http://community.qnx.com/sf/go/post120384 Hi, does anybody know whether it is possible to change the TCP/UDP packet size inside the packet filter hook? I'm trying to add my data to the end of the TCP payload. I use the m_append() function, and as far as I can see the mbuf really does contain my data after appending, but the server receives the original-sized packet without my data. I also tried to allocate a separate mbuf with my additional data and then concatenate it with the original buffer using m_cat(), but that didn't work either. Is it even possible? Fri, 20 Mar 2020 13:32:59 GMT http://community.qnx.com/sf/go/post120384 Yona Shaposhnik 2020-03-20T13:32:59Z post120292: Re: Why io-pkt acts single-threaded on one interface, but multi-threaded on the other interface? 
http://community.qnx.com/sf/go/post120292 Thanks for your prompt response. Tue, 03 Mar 2020 15:28:31 GMT http://community.qnx.com/sf/go/post120292 Liu Yongfeng 2020-03-03T15:28:31Z post120291: Re: Why io-pkt acts single-threaded on one interface, but multi-threaded on the other interface? http://community.qnx.com/sf/go/post120291 It was added to io-pkt after 6.6.0 was released. Please contact your QNX support and ask them for the patch containing the fix for Issue ID 2599867. Tue, 03 Mar 2020 15:26:40 GMT http://community.qnx.com/sf/go/post120291 Nick Reilly 2020-03-03T15:26:40Z post120290: Re: Why io-pkt acts single-threaded on one interface, but multi-threaded on the other interface? http://community.qnx.com/sf/go/post120290 Hello Reilly, I upgraded my system to 6.6.0, but I can't find the "-D" option for io-pkt. So, is the improvement you mentioned not published? How can I get it? Tue, 03 Mar 2020 14:55:26 GMT http://community.qnx.com/sf/go/post120290 Liu Yongfeng 2020-03-03T14:55:26Z post120278: Re: Why io-pkt acts single-threaded on one interface, but multi-threaded on the other interface? http://community.qnx.com/sf/go/post120278 Thank you again for your response. This helps a lot. Thu, 27 Feb 2020 15:44:41 GMT http://community.qnx.com/sf/go/post120278 Liu Yongfeng 2020-02-27T15:44:41Z post120274: Re: Why io-pkt acts single-threaded on one interface, but multi-threaded on the other interface? http://community.qnx.com/sf/go/post120274 Yes, this is the design of io-pkt in 6.5.0. We have improved it in later versions. Thu, 27 Feb 2020 13:54:09 GMT http://community.qnx.com/sf/go/post120274 Nick Reilly 2020-02-27T13:54:09Z post120273: Re: Why io-pkt acts single-threaded on one interface, but multi-threaded on the other interface? http://community.qnx.com/sf/go/post120273 Hello Reilly, Thank you very much for your prompt reply. 
I have to correct my statement on the testing steps in my initial post; that is, before we did the flood test on one interface, we pulled out the cable of the other. I investigated the tracelog carefully, and now I can confirm that, for wm1, both "io-pkt#00" and "io-pkt#01" can freely switch between the "stack context" processing thread and the "interrupt worker" thread; since we only have two worker threads, there is only one "interrupt worker" thread at any time, but either thread can be the one and only "interrupt worker" thread, just as the documentation says. But for wm0, "io-pkt#00" is always both the "interrupt worker" thread and the "stack context" thread, and "io-pkt#01" does NOTHING; it is always in the idle state, even when an application socket call is blocked waiting for an io-pkt thread to service it. So, could the design of io-pkt in 6.5.0 SP1 cause this situation to happen? Thanks and regards Thu, 27 Feb 2020 07:58:22 GMT http://community.qnx.com/sf/go/post120273 Liu Yongfeng 2020-02-27T07:58:22Z post120271: Re: Why io-pkt acts single-threaded on one interface, but multi-threaded on the other interface? http://community.qnx.com/sf/go/post120271 io-pkt does most of its processing in "Stack Context"; this is all the protocol handling plus the resource manager. In addition it does "Interrupt Worker" processing; this is usually the driver receive processing, where it manages the Rx descriptor ring and then sends the packet to the actual protocol handling code - once the EtherType is processed, the packet is put on the appropriate protocol queue for the "Stack Context" to handle. In your case, "Stack Context" processing is happening on "io-pkt #0x00", wm0 Rx "Interrupt Worker" processing is also happening on "io-pkt #0x00", and wm1 "Interrupt Worker" processing is happening on "io-pkt #0x01". 
Unfortunately you are running 6.5.0; in later versions we have recently added an option to io-pkt to address your scenario: -D - Run the resource manager/protocol layer stack context in a dedicated POSIX thread. By default this is off. This option can offer a performance improvement if you are sending and receiving TCP/IP traffic from applications on a multi-core system. This was added to 6.6.0 and later with Issue ID 2599867. I suggest you contact your QNX support representative to see if you can move to 6.6.0 or later, or to see if you can get this fix added to 6.5.0. Wed, 26 Feb 2020 17:46:18 GMT http://community.qnx.com/sf/go/post120271 Nick Reilly 2020-02-26T17:46:18Z post120270: Why io-pkt acts single-threaded on one interface, but multi-threaded on the other interface? http://community.qnx.com/sf/go/post120270 Altera Cyclone V SoC, dual core, Cortex-A9 architecture, QNX 6.5.0 SP1; the network driver snpsmac3504 supports two interfaces on the SoC, named wm0 and wm1. io-pkt is started with "io-pkt-v4 -ptcpip -d snpsmac3504 name=wm". We can see three threads: "io-pkt main", "io-pkt #0x00" and "io-pkt #0x01". When we send flood packets to wm0, with little traffic on wm1, "io-pkt #00" consumes nearly 100% CPU time of one CPU core, and "io-pkt #01" is ALWAYS IDLE, whether there is flood traffic on wm0 or not. When we send flood packets to wm1, with little traffic on wm0, both "io-pkt #00" and "io-pkt #01" consume nearly 100% CPU time of two CPU cores, which is as expected, because we use short valid packets at an ultra-high speed. There are applications in the system which send and receive packets from wm0/wm1 in non-blocking mode. For the flood on wm0, we find many socket calls blocked for hundreds of milliseconds handshaking with "io-pkt #00". But for the flood on wm1, no blocking is found for socket calls; sometimes "io-pkt #00" services the socket call, sometimes "io-pkt #01" services it, which is perfect, as expected for a multi-core system. 
But how to explain the single-thread behaviour for wm0? Thanks and regards Wed, 26 Feb 2020 16:09:59 GMT http://community.qnx.com/sf/go/post120270 Liu Yongfeng 2020-02-26T16:09:59Z post120261: Re: Unable to load packet filter lsm http://community.qnx.com/sf/go/post120261 My VM is x86 architecture:

# uname -m
x86pc

and my lsm library has the same permissions as lsm-pf-v4.so does:

# ls -l /lib/dll/lsm-libfilter.so
-rwxrwxr-x 1 root root 7384 Feb 13 14:43 /lib/dll/lsm-libfilter.so
# ls -l /lib/dll/lsm-pf-v4.so
-rwxrwxr-x 1 root root 239208 Feb 28 2017 /lib/dll/lsm-pf-v4.so

my code:

#include <sys/types.h>
#include <errno.h>
#include <sys/param.h>
#include <sys/conf.h>
#include <sys/socket.h>
#include <sys/mbuf.h>
#include <net/if.h>
#include <net/pfil.h>
#include <netinet/in.h>
#include <netinet/ip.h>
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/slog.h>
#include <sys/slogcodes.h>
#include "sys/io-pkt.h"
#include "nw_datastruct.h"

static int input_hook(void *arg, struct mbuf **m, struct ifnet *ifp, int dir, int fib)
{
    slogf(_SLOG_SETCODE(_SLOGC_TEST, 2), _SLOG_ERROR, "input");
    return 0;
}

static int output_hook(void *arg, struct mbuf **m, struct ifnet *ifp, int dir, int fib)
{
    slogf(_SLOG_SETCODE(_SLOGC_TEST, 2), _SLOG_ERROR, "output");
    return 0;
}

static int deinit_module(void);

static int iface_hook(void *arg, struct mbuf **m, struct ifnet *ifp, int dir, int fib)
{
    slogf(_SLOG_SETCODE(_SLOGC_TEST, 2), _SLOG_ERROR, "Iface hook called ... ");
    if (m == (struct mbuf **)PFIL_IFNET_ATTACH) {
        slogf(_SLOG_SETCODE(_SLOGC_TEST, 2), _SLOG_ERROR, "Interface attached\n");
    } else if (m == (struct mbuf **)PFIL_IFNET_DETACH) {
        slogf(_SLOG_SETCODE(_SLOGC_TEST, 2), _SLOG_ERROR, "Interface detached\n");
        deinit_module();
    }
    return 0;
}

static int ifacecfg_hook(void *arg, struct mbuf **m, struct ifnet *ifp, int dir, int fib)
{
    slogf(_SLOG_SETCODE(_SLOGC_TEST, 2), _SLOG_ERROR, "Iface cfg hook called with 0x%p\n", m);
    return 0;
}

static int deinit_module(void)
{
    struct pfil_head *pfh_inet;

    pfh_inet = pfil_head_get(PFIL_TYPE_AF, AF_INET);
    if (pfh_inet == NULL) {
        return ESRCH;
    }
    pfil_remove_hook(input_hook, NULL, PFIL_IN | PFIL_WAITOK, pfh_inet);
    pfil_remove_hook(output_hook, NULL, PFIL_OUT | PFIL_WAITOK, pfh_inet);

    pfh_inet = pfil_head_get(PFIL_TYPE_IFNET, 0);
    if (pfh_inet == NULL) {
        return ESRCH;
    }
    pfil_remove_hook(ifacecfg_hook, NULL, PFIL_IFNET, pfh_inet);
    pfil_remove_hook(iface_hook, NULL, PFIL_IFNET | PFIL_WAITOK, pfh_inet);

    slogf(_SLOG_SETCODE(_SLOGC_TEST, 2), _SLOG_ERROR, "Unloaded pfil hook\n");
    return 0;
}

int pfil_entry(void *dll_hdl, struct _iopkt_self *iopkt, char *options)
{
    struct pfil_head *pfh_inet;

    printf("pfil_entry+++++++++ enter\n");
    pfh_inet = pfil_head_get(PFIL_TYPE_AF, AF_INET);
    if (pfh_inet == NULL) {
        return ESRCH;
    }
    pfil_add_hook(input_hook, NULL, PFIL_IN | PFIL_WAITOK, pfh_inet);
    pfil_add_hook(output_hook, NULL, PFIL_OUT | PFIL_WAITOK, pfh_inet);

    pfh_inet = pfil_head_get(PFIL_TYPE_IFNET, 0);
    if (pfh_inet == NULL) {
        return ESRCH;
    }
    pfil_add_hook(iface_hook, NULL, PFIL_IFNET, pfh_inet);
    pfil_add_hook(ifacecfg_hook, NULL, PFIL_IFADDR, pfh_inet);

    slogf(_SLOG_SETCODE(_SLOGC_TEST, 2), _SLOG_ERROR, "Loaded pfil hook\n");
    return 0;
}

struct _iopkt_lsm_entry IOPKT_LSM_ENTRY_SYM(pfil) = IOPKT_LSM_ENTRY_SYM_INIT(pfil_entry);

and Makefile:

ARTIFACT = libfilter_1.so

#Build architecture/variant string, possible values: x86, armv7le, etc...
PLATFORM ?= x86

#Build profile, possible values: release, debug, profile, coverage
BUILD_PROFILE ?= release

CONFIG_NAME ?= $(PLATFORM)-$(BUILD_PROFILE)
OUTPUT_DIR = build/$(CONFIG_NAME)
TARGET = $(OUTPUT_DIR)/$(ARTIFACT)

#Compiler definitions
CC = qcc -Vgcc_nto$(PLATFORM)
CXX = qcc -lang-c++ -Vgcc_nto$(PLATFORM)
LD = qcc -Wl,-E -Vgcc_nto$(PLATFORM)

#User defined include/preprocessor flags and libraries
INCLUDES += -I /home/yona/qnx700/target/qnx7/usr/include/io-pkt/
#INCLUDES += -I/path/to/my/lib/include
#INCLUDES += -I../mylib/public
#LIBS += -L/path/to/my/lib/$(PLATFORM)/usr/lib -lmylib
#LIBS += -L../mylib/$(OUTPUT_DIR) -lmylib

#Compiler flags for build profiles
CCFLAGS_release += -O2 -D_KERNEL -DQNX_MFIB
CCFLAGS_debug += -g -O0 -fno-builtin
CCFLAGS_coverage += -g -O0 -ftest-coverage -fprofile-arcs -nopipe -Wc,-auxbase-strip,$@
LDFLAGS_coverage += -ftest-coverage -fprofile-arcs
CCFLAGS_profile += -g -O0 -finstrument-functions
LIBS_profile += -lprofilingS

#Generic compiler flags (which include build type flags)
CCFLAGS_all += -Wall -fmessage-length=0
CCFLAGS_all += $(CCFLAGS_$(BUILD_PROFILE))
#Shared library has to be compiled with -fPIC
CCFLAGS_all += -fPIC
LDFLAGS_all += $(LDFLAGS_$(BUILD_PROFILE))
LIBS_all += $(LIBS_$(BUILD_PROFILE))
DEPS = -Wp,-MMD,$(@:%.o=%.d),-MT,$@
#DEPS = -Wl,-E

#Macro to expand files recursively: parameters $1 - directory, $2 - extension, i.e. cpp
rwildcard = $(wildcard $(addprefix $1/*.,$2)) $(foreach d,$(wildcard $1/*),$(call rwildcard,$d,$2))

#Source list
SRCS = $(call rwildcard, src, c)

#Object files list
OBJS = $(addprefix $(OUTPUT_DIR)/,$(addsuffix .o, $(basename $(SRCS))))

#Compiling rule
$(OUTPUT_DIR)/%.o: %.c
	@mkdir -p $(dir $@)
	$(CC) -c $(DEPS) -o $@ $(INCLUDES) $(CCFLAGS_all) $(CCFLAGS) $<

#Linking rule
$(TARGET): $(OBJS)
	$(LD) -shared -o $(TARGET) $(LDFLAGS_all) $(LDFLAGS) $(OBJS) $(LIBS_all) $(LIBS)

#Rules section for default compilation and linking
all: $(TARGET)

clean:
	rm -fr $(OUTPUT_DIR)

rebuild: clean all

#Inclusion of dependencies (object files to source and includes)
-include $(OBJS:%.o=%.d)

What could be the issue? Does QNX have more detailed documentation about packet filter development? Thu, 13 Feb 2020 15:03:07 GMT http://community.qnx.com/sf/go/post120261 Yona Shaposhnik 2020-02-13T15:03:07Z post120259: Re: Unable to load packet filter lsm http://community.qnx.com/sf/go/post120259 Either the dlopen() isn't able to open your library (bad path to it, bad permissions, built for the wrong processor type, etc.) or the dlsym() isn't finding the "iopkt_lsm_entry" symbol that is generated by the line at the end of the file: struct _iopkt_lsm_entry IOPKT_LSM_ENTRY_SYM(pfil) = IOPKT_LSM_ENTRY_SYM_INIT(pfil_entry); Thu, 13 Feb 2020 14:34:21 GMT http://community.qnx.com/sf/go/post120259 Nick Reilly 2020-02-13T14:34:21Z post120257: Re: Unable to load packet filter lsm http://community.qnx.com/sf/go/post120257 I added extra logs, but it looks like the lsm loading failed before my code executed. Maybe I am doing something wrong, but I didn't find any more information about packet filter development in the official documentation. I just want to intercept network traffic from/to my network device. 
Thu, 13 Feb 2020 08:43:05 GMT http://community.qnx.com/sf/go/post120257 Yona Shaposhnik 2020-02-13T08:43:05Z post120254: Re: Unable to load packet filter lsm http://community.qnx.com/sf/go/post120254 io-pkt will call the entry function in the lsm, so add some debugging prints to that to determine what is going wrong. Wed, 12 Feb 2020 17:56:01 GMT http://community.qnx.com/sf/go/post120254 Nick Reilly 2020-02-12T17:56:01Z post120253: Re: Unable to load packet filter lsm http://community.qnx.com/sf/go/post120253 I added those defines and the warnings are gone. But I still have the same error when mounting the lsm. Wed, 12 Feb 2020 16:45:38 GMT http://community.qnx.com/sf/go/post120253 Yona Shaposhnik 2020-02-12T16:45:38Z post120252: Re: Unable to load packet filter lsm http://community.qnx.com/sf/go/post120252 Take a look at net/pfil.h and you will find that all those warnings you are getting are because you are building without _KERNEL and QNX_MFIB defined. If you define those, that should clean up your warnings, the symbols should then correspond to those in io-pkt, and your code should work. Wed, 12 Feb 2020 13:59:31 GMT http://community.qnx.com/sf/go/post120252 Nick Reilly 2020-02-12T13:59:31Z post120251: Unable to load packet filter lsm http://community.qnx.com/sf/go/post120251 I'm trying to compile and load the simple packet filter from the documentation (http://www.qnx.com/developers/docs/7.0.0/index.html#com.qnx.doc.core_networking/topic/filtering_PF.html). I use SDP 7.0 and an RTOS VM (also version 7.0) on VMware. 
The code compiles successfully:

qcc -Wl,-E -Vgcc_ntox86 -c -Wp,-MMD,build/x86-release/src/filter_1.d,-MT,build/x86-release/src/filter_1.o -o build/x86-release/src/filter_1.o -I /home/alexandr/qnx700/target/qnx7/usr/include/io-pkt/ -Wall -fmessage-length=0 -O2 -fPIC src/filter_1.c
src/filter_1.c: In function 'deinit_module':
src/filter_1.c:67:16: warning: implicit declaration of function 'pfil_head_get' [-Wimplicit-function-declaration]
  pfh_inet = pfil_head_get(PFIL_TYPE_AF, AF_INET); ^
src/filter_1.c:67:14: warning: assignment makes pointer from integer without a cast [-Wint-conversion]
  pfh_inet = pfil_head_get(PFIL_TYPE_AF, AF_INET); ^
src/filter_1.c:71:5: warning: implicit declaration of function 'pfil_remove_hook' [-Wimplicit-function-declaration]
  pfil_remove_hook(input_hook, NULL, PFIL_IN | PFIL_WAITOK, ^
src/filter_1.c:75:14: warning: assignment makes pointer from integer without a cast [-Wint-conversion]
  pfh_inet = pfil_head_get(PFIL_TYPE_IFNET, 0); ^
src/filter_1.c: In function 'pfil_entry':
src/filter_1.c:95:14: warning: assignment makes pointer from integer without a cast [-Wint-conversion]
  pfh_inet = pfil_head_get(PFIL_TYPE_AF, AF_INET); ^
src/filter_1.c:99:5: warning: implicit declaration of function 'pfil_add_hook' [-Wimplicit-function-declaration]
  pfil_add_hook(input_hook, NULL, PFIL_IN | PFIL_WAITOK, ^
src/filter_1.c:104:14: warning: assignment makes pointer from integer without a cast [-Wint-conversion]
  pfh_inet = pfil_head_get(PFIL_TYPE_IFNET,0); ^
qcc -Wl,-E -Vgcc_ntox86 -shared -o build/x86-release/libfilter_1.so build/x86-release/src/filter_1.o

but when I try to load my lsm:

# mount -vvv -Tio-pkt /root/lsm-filter.so
Parsed: mount from [/root/lsm-filter.so] mount on [NULL] type [io-pkt]
exec: mount_io-pkt -o rw -o implied -o nostat /root/lsm-filter.so /
Using internal mount (mount_io-pkt not found)
Type [io-pkt] Flags 0x80080000
Device [/root/lsm-filter.so] Directory [/] Options [NULL]
mount: Can't mount / (type io-pkt)
mount: Possible reason: No 
such device or address and in slog I can see only this line: Feb 12 13:48:15.543 iopkt.81934 main_buffer 0 Unable to load /root/lsm-filter.so: (null) Did anybody face such a problem? Could anybody help me with this? Wed, 12 Feb 2020 13:48:10 GMT http://community.qnx.com/sf/go/post120251 Yona Shaposhnik 2020-02-12T13:48:10Z post120128: Re: QNX6.6 imx6 WiFi driver for ublox http://community.qnx.com/sf/go/post120128 Thanks Niraj, Yes, I'm based in Italy, and I do not know who our FAE is, if one exists. We are supported by a local QNX partner, but it is more commercial support than technical: they have proposed a full QNX wireless framework just to get a WiFi module driver. For technical questions I use our standard QNX support or I dig in the newsgroups :-) - Our platform uses an imx6 SOM; we have an SDIO slot or an mPCIe slot - We are interested in both features, WiFi and BT (SPP profile only), but we can support BT via another dedicated chip that we have already used, so BT is not a problem. - For now, WiFi only. STA/AP/P2P could be used in the future at customer request. - OK for an NDA, if the customer agrees with the terms. - We can interface the module via SDIO, and UART for BT. Thanks again, Mario Thu, 26 Dec 2019 14:56:34 GMT http://community.qnx.com/sf/go/post120128 mario sangalli 2019-12-26T14:56:34Z post120123: Re: QNX6.6 imx6 WiFi driver for ublox http://community.qnx.com/sf/go/post120123 Hi Mario, I think you are based in Italy, right? Do you know who your FAE is? I am trying to contact your FAE so that he can contact you. It looks like they are gone for the holidays. In the meantime, I got some info/questions from Engineering to get more clarification. The u-blox EMMY-W1 has a Marvell 8887 chipset in it. We do support this for our CSP customers. We need information such as: - What is the target platform? The Marvell 8887 uses an SDIO connection and we currently support it on i.mx6 - What features are of interest? BT and WiFi? - WiFi only? STA mode, AP mode, P2P mode?
- We have an experimental 8887 driver on SDP 6.6. There is a newer version of the driver/firmware available on SDP 7.0. - I believe we need to ensure an NDA is in place between Marvell and the company in question in order to provide the binaries. - Which interface (SDIO or UART) is used for Bluetooth for the Marvell 8887 on the ublox EMMY-W1xx? Please reply to these questions at your convenience and your FAE will take over once he comes back. Thanks, - Niraj Fri, 20 Dec 2019 20:10:01 GMT http://community.qnx.com/sf/go/post120123 Niraj Desai 2019-12-20T20:10:01Z post120111: Re: QNX6.6 imx6 WiFi driver for ublox http://community.qnx.com/sf/go/post120111 Niraj, thank you very much. My company is Tecnint HTE; we develop custom hardware solutions and in this case we are proposing an imx6 SOM with QNX6.6 for this project. The customer also needs WiFi and BT; one candidate module is the Ublox EMMY-W1 (Marvell 88W8887 chip), which has a QNX driver (or so it is declared in the data sheet), so we prefer to use this one, or an equivalent, if it is supported. Any help will be appreciated. Thanks again M. Sangalli Mon, 16 Dec 2019 08:43:23 GMT http://community.qnx.com/sf/go/post120111 mario sangalli 2019-12-16T08:43:23Z post120110: Re: QNX6.6 imx6 WiFi driver for ublox http://community.qnx.com/sf/go/post120110 Hi Mario, I am an FAE at QNX. Which chipset is in the ublox EMMY-W1xx? Would you mind providing your company name? Which QNX version are you using? Thanks, Niraj Fri, 13 Dec 2019 18:53:53 GMT http://community.qnx.com/sf/go/post120110 Niraj Desai 2019-12-13T18:53:53Z post120108: QNX6.6 imx6 WiFi driver for ublox http://community.qnx.com/sf/go/post120108 Hi, is a WiFi driver available for such a platform? I've found one for the TI WiLink8; we are evaluating the ublox EMMY-W1xx module, and the data sheet says a QNX driver is available. Where can I get it? Thanks M.
Sangalli Thu, 12 Dec 2019 16:41:03 GMT http://community.qnx.com/sf/go/post120108 mario sangalli 2019-12-12T16:41:03Z post119831: Re: ptpd very long synchronisation time http://community.qnx.com/sf/go/post119831 Hi Nick, Yes, it does seem like that, but as I said in my original post I am starting the e1000 driver with the ptp flag. All my testing has been with the -C flag to see the statistics and I always see the following when ptpd and ptpd-avb start up: (ptpd notice) 16:04:47.562025 (init) hardware time support = 1. To be clear, when I use ptpd the clocks converge to < 1us offset with +/-1us jitter, so I am pretty sure hardware timestamping is enabled. Regards, John Fri, 23 Aug 2019 19:43:53 GMT http://community.qnx.com/sf/go/post119831 John Efstathiades 2019-08-23T19:43:53Z post119830: Re: ptpd very long synchronisation time http://community.qnx.com/sf/go/post119830 Hi John, That almost sounds like it is using software timestamping rather than hardware timestamping. When you load the e1000 driver are you passing "ptp" as an option to it? If you run ptpd with console logging on (-C option) is it showing hardware or software in the startup message? Regards, Nick Fri, 23 Aug 2019 18:12:37 GMT http://community.qnx.com/sf/go/post119830 Nick Reilly 2019-08-23T18:12:37Z post119827: Re: ptpd very long synchronisation time http://community.qnx.com/sf/go/post119827 Hi Lomash, Even with your numbers I cannot get ptpd-avb to synchronise properly with the ptp4l master. I find that the slave clock converges to about 500us from the master clock and then begins to oscillate between 500us and 1500us. Thanks for your help on this. I'll try again once I have the latest software.
Regards, John Fri, 23 Aug 2019 10:24:55 GMT http://community.qnx.com/sf/go/post119827 John Efstathiades 2019-08-23T10:24:55Z post119822: Re: ptpd very long synchronisation time http://community.qnx.com/sf/go/post119822 I have neighborPropDelayThresh configured at 50000 and min_neighbor_prop_delay at -20000000 Thu, 22 Aug 2019 16:27:55 GMT http://community.qnx.com/sf/go/post119822 Lomash Gupta 2019-08-22T16:27:55Z post119821: Re: ptpd very long synchronisation time http://community.qnx.com/sf/go/post119821 Hi Lomash, Could you tell me what values you used for these parameters, please? John Thu, 22 Aug 2019 15:53:41 GMT http://community.qnx.com/sf/go/post119821 John Efstathiades 2019-08-22T15:53:41Z post119820: Re: ptpd very long synchronisation time http://community.qnx.com/sf/go/post119820 I have had good results while running ptpd-avb on a QNX machine as the slave and directly connected to an Ubuntu 18.04 machine running ptp4l with the gPTP config. But for this to work reliably, I had to update the neighborPropDelayThresh and min_neighbor_prop_delay parameters to appropriate values on the linux side. I would also recommend using the 7.0.4 release. Thu, 22 Aug 2019 14:51:51 GMT http://community.qnx.com/sf/go/post119820 Lomash Gupta 2019-08-22T14:51:51Z post119819: Re: ptpd very long synchronisation time http://community.qnx.com/sf/go/post119819 Hi Lomash, > Our test records indicate a sync time of 9 minutes while running ptpd and > about 5 when running ptpd-avb. Thanks for the data. > Regarding ptpd-avb not syncing, are you using ptpd-avb as the master? If so, > we recently discovered a bug in the implementation which affects time sync. So > , you could ask QNX support for an experimental release with the fix for JI: > 2773204. Yes, I was using ptpd-avb as the master. I will ask for this release > However, ptpd-avb should still be able to sync with a ptp4l master without the > fix.You could try that out. I have tried and got mixed results. 
I did see it synchronise once but most of the time it does not. More specifically, the slave clock diverges from the master rather than converging. Most of my tests were using P2P with a direct cable connection between master and slave interfaces. I am using ptp4l version 1.6 on Ubuntu 16.04 with an i210 master. What is your test configuration? John Wed, 21 Aug 2019 20:01:22 GMT http://community.qnx.com/sf/go/post119819 John Efstathiades 2019-08-21T20:01:22Z post119818: Re: ptpd very long synchronisation time http://community.qnx.com/sf/go/post119818 Hi John, Our test records indicate a sync time of 9 minutes while running ptpd and about 5 minutes when running ptpd-avb. Regarding ptpd-avb not syncing, are you using ptpd-avb as the master? If so, we recently discovered a bug in the implementation which affects time sync. So, you could ask QNX support for an experimental release with the fix for JI:2773204. However, ptpd-avb should still be able to sync with a ptp4l master without the fix. You could try that out. Thanks, Lomash Wed, 21 Aug 2019 19:12:53 GMT http://community.qnx.com/sf/go/post119818 Lomash Gupta 2019-08-21T19:12:53Z post119817: ptpd very long synchronisation time http://community.qnx.com/sf/go/post119817 Hello, I am running ptpd with an Intel i210 to synchronise a PTP slave. The e1000 driver is started with the ptp flag so the hardware timestamp support in the i210 is enabled. I'm starting ptpd in the foreground to see the statistics on the console as follows: # ptpd -C -g -L -b wm0 ptpd takes a very long time to synchronise the slave PTP time to the master - anything from several minutes to more than 10 minutes. The time required to converge the clocks appears to be dependent on the initial clock offset. For example, if the offset is tens or hundreds of milliseconds, the convergence is linear until the offset is less than one millisecond, at which point it takes a non-linear path, still slowly.
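As an aside on the Linux side of this thread: the two parameters Lomash quotes are standard gPTP options in ptp4l's configuration-file format, so the master-side tuning he describes corresponds to a fragment along these lines (a sketch only -- parameter names are from linuxptp's gPTP configuration, values are the ones quoted in this thread):

```ini
[global]
# values Lomash reports using on the ptp4l (Linux) master
neighborPropDelayThresh    50000
min_neighbor_prop_delay    -20000000
```

This would typically be merged into the gPTP config file shipped with linuxptp and passed to the daemon as, for example, ptp4l -f gPTP.cfg -i <iface>.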
By comparison, Linux ptp4l takes about 30 seconds to synchronise the clocks regardless of the initial clock offset. The PTP master in my configuration is another QNX system with an i210 interface with hardware timestamps enabled. The same behaviour is seen if the master is a Linux machine running ptp4l. I've tried E2E and P2P - behaviour is the same. ptpd-avb appears to be broken - the slave does not synchronise at all. I am using the generic x86_64 BSP on a Skylake target board. My ptpd is from package com.qnx.sdp.target.net.ptp/7.0.1031.S201810151344. I have also tried the new SDP 7.0.4 release (com.qnx.sdp.target.net.ptp/7.0.4247.S201906281113) but the behaviour is the same. Is the behaviour I am seeing normal for QNX ptpd? If not, what is the expected QNX ptpd slave convergence time? 10 minutes is way too long to be useful. Thanks, John Wed, 21 Aug 2019 16:30:21 GMT http://community.qnx.com/sf/go/post119817 John Efstathiades 2019-08-21T16:30:21Z post119811: Re: Intel I211 driver for QNX 6.5 http://community.qnx.com/sf/go/post119811 Got it. Thank you. Tue, 20 Aug 2019 00:18:42 GMT http://community.qnx.com/sf/go/post119811 Kok Keong Neo 2019-08-20T00:18:42Z post119810: Re: Intel I211 driver for QNX 6.5 http://community.qnx.com/sf/go/post119810 I have attached a desktop build of the latest e1000 driver. On 2019-08-19, 1:37 AM, "Kok Keong Neo" <community-noreply@qnx.com> wrote: Hi, where can I get the latest working network driver for the Intel I211 (0x1539)?
I am using the current driver in QNX 6.5.0 SP1 devnp-e1000.so but it cannot find I211 _______________________________________________ Networking Drivers http://community.qnx.com/sf/go/post119809 To cancel your subscription to this discussion, please e-mail drivers-networking-unsubscribe@community.qnx.com Mon, 19 Aug 2019 12:51:34 GMT http://community.qnx.com/sf/go/post119810 Hugh Brown 2019-08-19T12:51:34Z post119809: Intel I211 driver for QNX 6.5 http://community.qnx.com/sf/go/post119809 Hi, where can I get the latest working network driver for the Intel I211 (0x1539)? I am using the current driver in QNX 6.5.0 SP1 devnp-e1000.so but it cannot find the I211. Mon, 19 Aug 2019 05:38:06 GMT http://community.qnx.com/sf/go/post119809 Kok Keong Neo 2019-08-19T05:38:06Z post119798: Re: ifconfig command does not show all the interface after adding new interface http://community.qnx.com/sf/go/post119798 Is your sam driver actually successfully creating an interface? Tue, 30 Jul 2019 20:29:19 GMT http://community.qnx.com/sf/go/post119798 Nick Reilly 2019-07-30T20:29:19Z post119795: Re: ifconfig command does not show all the interface after adding new interface http://community.qnx.com/sf/go/post119795 For some reason I do not see the newly added interface even if I provide the prefix ----------------- # io-pkt-v6-hc -i1 -dsam -p tcpip prefix=/alt # ifconfig lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 33192 inet 127.0.0.1 netmask 0xff000000 inet6 ::1 prefixlen 128 inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1 en0: flags=80008843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,SHIM> mtu 1500 address: 5e:24:04:00:7e:dd media: Ethernet 100baseTX full-duplex status: active inet 10.10.70.189 netmask 0xfffffe00 broadcast 10.10.71.255 inet6 fe80::5c24:4ff:fe00:7edd%en0 prefixlen 64 scopeid 0x2 # SOCK=/alt ifconfig lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 33192 inet 127.0.0.1 netmask 0xff000000 inet6 ::1 prefixlen 128 inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1 # ls -l /alt/dev/socket/
total 0 srw-rw-rw- 1 root root 0 Jan 07 22:04 1 srw-rw-rw- 1 root root 0 Jan 07 22:04 17 .... ---------------------------------------------- Tue, 30 Jul 2019 18:17:10 GMT http://community.qnx.com/sf/go/post119795 Rajesh Jarang(deleted) 2019-07-30T18:17:10Z post119789: Re: ifconfig command does not show all the interface after adding new interface http://community.qnx.com/sf/go/post119789 There are some important parameters missing in the startup of the 2nd io-pkt that are causing it to clobber the 1st invocation of io-pkt. 1st stack (instance 0) startup is fine: # io-pkt-v4-hc -dsmsc9500 mac=5e2404007edd # ifconfig lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 33192 inet 127.0.0.1 netmask 0xff000000 en0: flags=80008843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,SHIM> mtu 1500 address: 5e:24:04:00:7e:dd media: Ethernet 100baseTX full-duplex status: active inet 10.10.70.189 netmask 0xff000000 broadcast 10.255.255.255 # 2nd stack needs both an instance and a prefix so it does not clobber the 1st (instance 0) stack: # io-pkt-v6-hc -i1 -dsam -p tcpip prefix=/alt # SOCK=/alt ifconfig lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 33192 inet 127.0.0.1 netmask 0xff000000 inet6 ::1 prefixlen 128 inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1 sam0: flags=8802<BROADCAST,SIMPLEX,MULTICAST> mtu 1500 address: 00:01:02:03:04:05 # For all commands referencing the 1st stack (instance 0), issue them directly without a SOCK environment variable. For all commands referencing the 2nd stack (instance 1 at prefix /alt), the SOCK=/alt environment variable is needed. Prefixes other than '/alt' can be used -- reference the io-pkt documentation for use of the prefix parameter. 
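Dean's per-command SOCK=/alt prefix can also be applied programmatically from C. The following is a sketch only (assuming, as the ls -l /alt/dev/socket output above suggests, that the prefix simply relocates the stack's /dev/socket namespace; the helper names here are mine, not a QNX API):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Build the socket-manager path a given prefix implies, e.g. "/alt" ->
 * "/alt/dev/socket".  An empty/NULL prefix means the default (instance 0)
 * stack. */
static void stack_socket_path(const char *prefix, char *buf, size_t len)
{
    snprintf(buf, len, "%s/dev/socket", prefix ? prefix : "");
}

/* Select the secondary stack for all subsequent socket calls in this
 * process -- the programmatic equivalent of the SOCK=/alt command-line
 * prefix used above. */
static void select_stack(const char *prefix)
{
    if (prefix)
        setenv("SOCK", prefix, 1);
    else
        unsetenv("SOCK");
}
```

Calling select_stack("/alt") before the process's first socket operation would direct it at the instance-1 stack, just like the SOCK=/alt ifconfig invocations in the transcript above.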
Sat, 27 Jul 2019 00:36:11 GMT http://community.qnx.com/sf/go/post119789 Dean Denter 2019-07-27T00:36:11Z post119788: Re: ifconfig command does not show all the interface after adding new interface http://community.qnx.com/sf/go/post119788 My goal is to start two network stacks, io-pkt-v4-hc (handling IPv4 only) and io-pkt-v6-hc (for IPv6), and make sure both are shown in ifconfig. In the above command sequence, even if I run "mount -T io-pkt devnp-sam.so", ifconfig does not show the en0 interface at all. See the commands/output below. (1) Why is ifconfig not showing en0? pidin shows both stacks running. (2) I want to make two separate interfaces, one exclusively for IPv4 and the other for IPv6. What are the correct steps? ---------------------------------------------------- # io-pkt-v4-hc -dsmsc9500 mac=5e2404007ede # ifconfig en0 10.10.70.189 # ifconfig lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 33192 inet 127.0.0.1 netmask 0xff000000 en0: flags=80008843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,SHIM> mtu 1500 address: 5e:24:04:00:7e:de media: Ethernet 100baseTX full-duplex status: active inet 10.10.70.189 netmask 0xff000000 broadcast 10.255.255.255 # io-pkt-v6-hc # mount -T io-pkt devnp-sam.so # ifconfig lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 33192 inet 127.0.0.1 netmask 0xff000000 inet6 ::1 prefixlen 128 inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1 sam0: flags=8802<BROADCAST,SIMPLEX,MULTICAST> mtu 1500 address: 00:01:02:03:04:05 # pidin arg pid Arguments ......
49170 io-pkt-v4-hc -dsmsc9500 mac=5e2404007edd 69651 io-pkt-v6-hc 102420 pidin arg ------------------------------------------------------------------------------------------ Fri, 26 Jul 2019 22:15:31 GMT http://community.qnx.com/sf/go/post119788 Rajesh Jarang(deleted) 2019-07-26T22:15:31Z post119787: Re: ifconfig command does not show all the interface after adding new interface http://community.qnx.com/sf/go/post119787 Hi Rajesh, By calling io-pkt-v6-hc you are starting a new network stack with no interfaces except lo0. To add a new network interface to a running stack you must use the mount command: mount -T io-pkt devnp-sam.so Fri, 26 Jul 2019 07:30:16 GMT http://community.qnx.com/sf/go/post119787 Peter Huber 2019-07-26T07:30:16Z post119786: ifconfig command does not show all the interface after adding new interface http://community.qnx.com/sf/go/post119786 I am using a Panda board for adding a new network driver. As an example I used the sample driver (sam.c) provided by QNX, but the following result can be reproduced with other drivers too. Here are excerpts of the commands I ran; text added between /* */ are my comments. As you see below, the en0 interface disappears as soon as another new driver is added. Is there anything 'ifconfig' is looking for? Why is it not showing all the interfaces?
-------------------------------------------------------------------------------------------------------------------------- # io-pkt-v4-hc -dsmsc9500 mac=5e2404007ede /* Loading the ethernet driver */ # ifconfig lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 33192 inet 127.0.0.1 netmask 0xff000000 en0: flags=80008802<BROADCAST,SIMPLEX,MULTICAST,SHIM> mtu 1500 address: 5e:24:04:00:7e:de media: Ethernet 100baseTX full-duplex status: active # ifconfig en0 10.10.70.189 # ifconfig /* ifconfig correctly shows the two interfaces lo0 and en0 */ lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 33192 inet 127.0.0.1 netmask 0xff000000 en0: flags=80008843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,SHIM> mtu 1500 address: 5e:24:04:00:7e:dd media: Ethernet 100baseTX full-duplex status: active inet 10.10.70.189 netmask 0xff000000 broadcast 10.255.255.255 # io-pkt-v6-hc -i1 -dsam /* loading the second driver (sample driver) */ # ifconfig /* it shows sam0 correctly but en0 disappeared !!!! */ lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 33192 inet 127.0.0.1 netmask 0xff000000 inet6 ::1 prefixlen 128 inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1 sam0: flags=8802<BROADCAST,SIMPLEX,MULTICAST> mtu 1500 address: 00:01:02:03:04:05 ------------------------------------------------------------------------------------------------------------------------ Thu, 25 Jul 2019 19:11:03 GMT http://community.qnx.com/sf/go/post119786 Rajesh Jarang(deleted) 2019-07-25T19:11:03Z post119782: Re: Intel I219 Gigabit ethernet support in devnp-e1000.so ? http://community.qnx.com/sf/go/post119782 When I used this driver, it reported the following error. "ldd:FATAL: Unresolved symbol "stk_context_callback_2" called from devnp-e1000.so" but with the latest networking manager io-pkt-v4-hc you provided, it works fine. Hugh, thank you for your help. > I have attached a desktop build of the latest 6.5.0 e1000 driver. > > Hugh. 
> > On 2019-07-19, 5:54 AM, "chen Wang" <community-noreply@qnx.com> wrote: > > Can the devnp-e1000.so driver in the accessory be used in the QNX 6.5.0 > SP1? > > Driver is attached. > > > > > > > > On 2016-11-17, 11:17 AM, "Davide Ancri" <community-noreply@qnx.com> wrote: > > > > >great news Hugh! > > > > > >a desktop build would be really nice for completing my tests, then we > > >will wait the official one for the product release. > > > > > >thanks a lot! > > >Davide > > > > > >> This device is supported in the latest 6.6.0 e1000 driver. If you > want > > >>the > > >> official version, you will have to request it through your sales rep, > > > >> otherwise I can give you a desktop build of the driver. > > >> > > >> > > >> > > >> On 2016-11-17, 10:33 AM, "Davide Ancri" <community-noreply@qnx.com> > > >>wrote: > > >> > > >> >hi all > > >> > > > >> >I'm trying to play with a new pc card mounting an embedded Intel > I219 > > >> >Gigabit ethernet device: > > >> > > > >> >Class = Network (Ethernet) > > >> >Vendor ID = 8086h, Intel Corporation > > >> >Device ID = 156fh, Unknown Unknown > > >> > > > >> >but devnp-e1000.so (qnx 6.6) cannot recognize it. > > >> > > > >> >Anyone knows if I219 support is planned in future e1000 version? > > >> >Or maybe is there a beta version already supporting it? 
> > >> > > > >> >thanks a lot > > >> >Davide > > >> > > > >> > > > >> > > > >> >_______________________________________________ > > >> > > > >> >Networking Drivers > > >> >http://community.qnx.com/sf/go/post117134 > > >> >To cancel your subscription to this discussion, please e-mail > > >> >drivers-networking-unsubscribe@community.qnx.com > > >> > > > > > > > > > > > > > > > > > > > > >_______________________________________________ > > > > > >Networking Drivers > > >http://community.qnx.com/sf/go/post117136 > > >To cancel your subscription to this discussion, please e-mail > > >drivers-networking-unsubscribe@community.qnx.com > > > > > > > > > _______________________________________________ > > Networking Drivers > http://community.qnx.com/sf/go/post119780 > To cancel your subscription to this discussion, please e-mail drivers- > networking-unsubscribe@community.qnx.com > > Mon, 22 Jul 2019 07:38:34 GMT http://community.qnx.com/sf/go/post119782 chen Wang(deleted) 2019-07-22T07:38:34Z post119781: Re: Intel I219 Gigabit ethernet support in devnp-e1000.so ? http://community.qnx.com/sf/go/post119781 I have attached a desktop build of the latest 6.5.0 e1000 driver. Hugh. On 2019-07-19, 5:54 AM, "chen Wang" <community-noreply@qnx.com> wrote: Can the devnp-e1000.so driver in the accessory be used in the QNX 6.5.0 SP1? > Driver is attached. > > > > On 2016-11-17, 11:17 AM, "Davide Ancri" <community-noreply@qnx.com> wrote: > > >great news Hugh! > > > >a desktop build would be really nice for completing my tests, then we > >will wait the official one for the product release. > > > >thanks a lot! > >Davide > > > >> This device is supported in the latest 6.6.0 e1000 driver. If you want > >>the > >> official version, you will have to request it through your sales rep, > >> otherwise I can give you a desktop build of the driver. 
> >> > >> > >> > >> On 2016-11-17, 10:33 AM, "Davide Ancri" <community-noreply@qnx.com> > >>wrote: > >> > >> >hi all > >> > > >> >I'm trying to play with a new pc card mounting an embedded Intel I219 > >> >Gigabit ethernet device: > >> > > >> >Class = Network (Ethernet) > >> >Vendor ID = 8086h, Intel Corporation > >> >Device ID = 156fh, Unknown Unknown > >> > > >> >but devnp-e1000.so (qnx 6.6) cannot recognize it. > >> > > >> >Anyone knows if I219 support is planned in future e1000 version? > >> >Or maybe is there a beta version already supporting it? > >> > > >> >thanks a lot > >> >Davide > >> > > >> > > >> > > >> >_______________________________________________ > >> > > >> >Networking Drivers > >> >http://community.qnx.com/sf/go/post117134 > >> >To cancel your subscription to this discussion, please e-mail > >> >drivers-networking-unsubscribe@community.qnx.com > >> > > > > > > > > > > > > > >_______________________________________________ > > > >Networking Drivers > >http://community.qnx.com/sf/go/post117136 > >To cancel your subscription to this discussion, please e-mail > >drivers-networking-unsubscribe@community.qnx.com > _______________________________________________ Networking Drivers http://community.qnx.com/sf/go/post119780 To cancel your subscription to this discussion, please e-mail drivers-networking-unsubscribe@community.qnx.com Fri, 19 Jul 2019 12:31:25 GMT http://community.qnx.com/sf/go/post119781 Hugh Brown 2019-07-19T12:31:25Z post119780: Re: Intel I219 Gigabit ethernet support in devnp-e1000.so ? http://community.qnx.com/sf/go/post119780 Can the devnp-e1000.so driver in the accessory be used in the QNX 6.5.0 SP1? > Driver is attached. > > > > On 2016-11-17, 11:17 AM, "Davide Ancri" <community-noreply@qnx.com> wrote: > > >great news Hugh! > > > >a desktop build would be really nice for completing my tests, then we > >will wait the official one for the product release. > > > >thanks a lot! 
> >Davide > > > >> This device is supported in the latest 6.6.0 e1000 driver. If you want > >>the > >> official version, you will have to request it through your sales rep, > >> otherwise I can give you a desktop build of the driver. > >> > >> > >> > >> On 2016-11-17, 10:33 AM, "Davide Ancri" <community-noreply@qnx.com> > >>wrote: > >> > >> >hi all > >> > > >> >I'm trying to play with a new pc card mounting an embedded Intel I219 > >> >Gigabit ethernet device: > >> > > >> >Class = Network (Ethernet) > >> >Vendor ID = 8086h, Intel Corporation > >> >Device ID = 156fh, Unknown Unknown > >> > > >> >but devnp-e1000.so (qnx 6.6) cannot recognize it. > >> > > >> >Anyone knows if I219 support is planned in future e1000 version? > >> >Or maybe is there a beta version already supporting it? > >> > > >> >thanks a lot > >> >Davide > >> > > >> > > >> > > >> >_______________________________________________ > >> > > >> >Networking Drivers > >> >http://community.qnx.com/sf/go/post117134 > >> >To cancel your subscription to this discussion, please e-mail > >> >drivers-networking-unsubscribe@community.qnx.com > >> > > > > > > > > > > > > > >_______________________________________________ > > > >Networking Drivers > >http://community.qnx.com/sf/go/post117136 > >To cancel your subscription to this discussion, please e-mail > >drivers-networking-unsubscribe@community.qnx.com > Fri, 19 Jul 2019 09:54:51 GMT http://community.qnx.com/sf/go/post119780 chen Wang(deleted) 2019-07-19T09:54:51Z post119779: Re: Intel I219 Gigabit ethernet support in devnp-e1000.so ? http://community.qnx.com/sf/go/post119779 Hi Hugh,Does the intel i219 have a driver that supports QNX 6.5.0 sp1? Fri, 19 Jul 2019 09:52:09 GMT http://community.qnx.com/sf/go/post119779 chen Wang(deleted) 2019-07-19T09:52:09Z post119730: Re: RTL8812BU USB Wifi driver http://community.qnx.com/sf/go/post119730 Yeah. I have a board design based upon the Octavo AM335x in development. We have provision for SDIO for the WiLink8. 
Short term I need a USB solution. Wed, 05 Jun 2019 18:00:37 GMT http://community.qnx.com/sf/go/post119730 Todd Peterson 2019-06-05T18:00:37Z post119729: Re: RTL8812BU USB Wifi driver http://community.qnx.com/sf/go/post119729 If you are looking for USB WiFi under 6.5.0, then the only ones are the old Ralink etc. The Wilink8 driver isn't a USB driver. On 2019-06-05, 1:17 PM, "Todd Peterson(deleted)" <community-noreply@qnx.com> wrote: Thanks. Is there a current list of supported wifi chipsets? The ones that are installed with QNX6.5 are really old Ralink chipsets. I did find a WiLink8 driver. Will try to find an appropriate device to try it out. _______________________________________________ Networking Drivers http://community.qnx.com/sf/go/post119728 To cancel your subscription to this discussion, please e-mail drivers-networking-unsubscribe@community.qnx.com Wed, 05 Jun 2019 17:48:49 GMT http://community.qnx.com/sf/go/post119729 Hugh Brown 2019-06-05T17:48:49Z post119728: Re: RTL8812BU USB Wifi driver http://community.qnx.com/sf/go/post119728 Thanks. Is there a current list of supported wifi chipsets? The ones that are installed with QNX6.5 are really old Ralink chipsets. I did find a WiLink8 driver. Will try to find an appropriate device to try it out. Wed, 05 Jun 2019 17:17:45 GMT http://community.qnx.com/sf/go/post119728 Todd Peterson 2019-06-05T17:17:45Z post119727: Re: RTL8812BU USB Wifi driver http://community.qnx.com/sf/go/post119727 AFAIK, we have no intention to support this device. It would require several months of development. On 2019-05-31, 6:17 PM, "Todd Peterson(deleted)" <community-noreply@qnx.com> wrote: For ARM Beagleboard (DM3730), Beaglebone Black or Pocket Beagle (AM335x). Running QNX 6.5 on Beagleboard or 6.6 on the last two. Does this driver exist? If not, how much work is required to make it work? The critter works on Linux on those boards. Here a link to the actual device that I have in hand. 
https://www.amazon.com/Realtek-RTL8192AU-Wireless-Adapter-Antenna/dp/B078NSSM7W Any help is greatly appreciated. Thanks, Todd Wed, 05 Jun 2019 14:14:37 GMT http://community.qnx.com/sf/go/post119727 Hugh Brown 2019-06-05T14:14:37Z post119714: RTL8812BU USB Wifi driver http://community.qnx.com/sf/go/post119714 For ARM Beagleboard (DM3730), Beaglebone Black or Pocket Beagle (AM335x). Running QNX 6.5 on Beagleboard or 6.6 on the last two. Does this driver exist? If not, how much work is required to make it work? The critter works on Linux on those boards. Here is a link to the actual device that I have in hand. https://www.amazon.com/Realtek-RTL8192AU-Wireless-Adapter-Antenna/dp/B078NSSM7W Any help is greatly appreciated. Thanks, Todd Wed, 29 May 2019 04:17:38 GMT http://community.qnx.com/sf/go/post119714 Todd Peterson 2019-05-29T04:17:38Z post119695: Re: io-pkt thread context nightmare http://community.qnx.com/sf/go/post119695 Hello Nick, thanks for your detailed explanation. Now things got much clearer to me. What a high price to make the NetBSD stuff usable for a clean, structured realtime OS... Thanks again Michael Mon, 06 May 2019 10:32:52 GMT http://community.qnx.com/sf/go/post119695 Michael Tasche 2019-05-06T10:32:52Z post119694: Relationship between pagesize and mclbytes http://community.qnx.com/sf/go/post119694 I want to support 9000-byte jumbo frames and want to create clusters with 9000 byte buffers. Is setting mclbytes to 9000 the right way to do this? What value should pagesize be set to for optimum performance? In the Core Networking Guide the jumbo frames example sets both parameters to 8192 to support a jumbo MTU of 8100, but it does not explain the significance of the pagesize value or its relationship to mclbytes. 
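On the pagesize/mclbytes question: in the guide's example both values are 8192, i.e. exactly one cluster per page-sized allocation. Assuming that relationship holds in general (clusters are carved from pagesize-sized chunks, so pagesize should be a power of two no smaller than mclbytes -- an assumption to verify against the io-pkt documentation, not a statement of the authoritative constraint), a 9000-byte cluster would imply rounding pagesize up to 16384. A small helper showing that arithmetic:

```c
#include <stddef.h>

/* Round v up to the next power of two (v >= 1). */
static size_t round_up_pow2(size_t v)
{
    size_t p = 1;
    while (p < v)
        p <<= 1;
    return p;
}

/* Smallest pagesize that can hold at least one mclbytes-sized cluster,
 * under the assumption that pagesize must be a power of two >= mclbytes. */
static size_t min_pagesize_for_cluster(size_t mclbytes)
{
    return round_up_pow2(mclbytes);
}
```

With these numbers the stack would be started with something like io-pkt-v6-hc -ptcpip pagesize=16384,mclbytes=9000 (option spelling as in the guide's jumbo frames example; the exact values should be confirmed against the documentation).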
Mon, 06 May 2019 08:13:45 GMT http://community.qnx.com/sf/go/post119694 John Efstathiades 2019-05-06T08:13:45Z post119690: Re: dev_attach parameters http://community.qnx.com/sf/go/post119690 I'll add this information to the docs. Wed, 01 May 2019 15:23:47 GMT http://community.qnx.com/sf/go/post119690 Steve Reid 2019-05-01T15:23:47Z post119689: Re: dev_attach parameters http://community.qnx.com/sf/go/post119689 Hi Nick, Thanks - exactly what I was after! Regards, John Wed, 01 May 2019 15:22:05 GMT http://community.qnx.com/sf/go/post119689 John Efstathiades 2019-05-01T15:22:05Z post119688: Re: dev_attach parameters http://community.qnx.com/sf/go/post119688 Hi John, Here's the function signature: int dev_attach(char *drvr, char *options, struct cfattach *ca, void *cfat_arg, int *single, struct device **devp, int (*print)(void *, const char *)) drvr is a string that is used for the interface name e.g. "sc" ends up creating an interface "sc0" by default. options is the options string that was passed to the driver - this is parsed by dev_attach() looking for "name", "lan" and "unit" options which will override the default naming of the interface. "lan" and "unit" are identical in meaning and will override the number appended to the interface naming rather than it just being a sequential number of all the ones of that type. "name" overrides the "drvr" string. ca is the structure formed by the CFATTACH_DECL() macro which specifies the size of the device structure and the attach and detach functions. cfat_arg is the attach arg - it comes through to the driver attach function as parameter 3. single isn't really used: if the "lan" or "unit" option is in the options string then it gets set to 1, but that's it. devp is used in two ways. First of all, if it is set to non-NULL on entry then it specifies the parent device that this device is a child of. 
I don't recall a driver ever actually using this - there is a check on removal that the device being removed is not the parent of any remaining devices. All drivers I have seen set it to NULL to specify that there is no parent. Secondly, it is set as a pointer to the dev structure that is also passed to the attach function as parameter 2, so it is a way of retrieving the device structure after the call to dev_attach() - again, I don't recall any driver using this. I have seen a bug where this was not initialised and luckily pointed to NULL until a code change was made, after which it pointed to a random value and caused the driver to crash, so please set it to NULL! print I've never seen used; always set it to NULL. dev_attach() actually does: if (print != NULL) (*print)(cfat_arg, NULL); so I suppose you could use it for debugging. Regards, Nick. Wed, 01 May 2019 14:17:07 GMT http://community.qnx.com/sf/go/post119688 Nick Reilly 2019-05-01T14:17:07Z post119687: dev_attach parameters http://community.qnx.com/sf/go/post119687 Can someone provide a definitive description of the parameters taken by dev_attach()? There are several posts in this forum that have asked this question going back to 2009 but no clear answer that I can find. In particular, what does dev_attach() do with the 5th and 6th parameters? What are the expected values and how/when are they updated by dev_attach()? How is the 7th parameter used? Thanks, John Wed, 01 May 2019 09:38:51 GMT http://community.qnx.com/sf/go/post119687 John Efstathiades 2019-05-01T09:38:51Z post119686: Re: io-pkt thread context nightmare http://community.qnx.com/sf/go/post119686 All driver callbacks expect to run in stack context with the exception of receive and start (transmit), so the following expect to be in stack context: driver entry, attach, detach, init, stop, ioctl. The driver receive function runs in an interrupt worker context.
While the driver start function is often called from stack context, it must also be capable of running from interrupt worker context - in the case of bridging or fast forwarding, the Rx on the ingress interface will directly call the Tx on the egress interface without passing into stack context. You may be lucky and have some driver callbacks not fail when called from a context other than stack context, but this is just luck with that particular driver. Stack context is single threaded, with multiple coroutines in a run-to-completion model; the coroutines may choose to yield to avoid blocking stack context (which could block traffic on other interfaces that needs stack context). The stack context coroutines yield by calling ltsleep(), and nic_delay() is a wrapper for this. The e1000 driver (and most drivers) use this when waiting for hardware. The nic_mutex() call is used to serialise operations to the hardware - think of it as a standard mutex that works across stack context coroutines, but note that it introduces a yielding point as it may not be able to acquire the lock. To answer your other questions: The memcpy() isn't going to yield stack context or access the hardware so it doesn't need to be inside the mutex; nic_mutex() is taken around the update stats call because that is going to read hardware and we need to ensure that other operations are not happening on the hardware at the same time. ifq_enqueue(ifp, m) is potentially going to call the driver start routine. While this doesn't need to be in stack context, it does need to be an mbuf handling thread - so an nw_pthread and not just a pthread. mbufs should only be touched from nw_pthreads and not plain pthreads, but note that an nw_pthread needs to have a quiesce function for when io-pkt is updating certain structures. Ensuring that the quiesce can happen in a multi-threaded scenario takes careful planning to avoid deadlocking.
Please name any threads that you do create - by default they get an io-pkt#0x01 style name, which makes people think that they are io-pkt threads and not that they belong to an lsm! Regards, Nick Tue, 30 Apr 2019 13:50:27 GMT http://community.qnx.com/sf/go/post119686 Nick Reilly 2019-04-30T13:50:27Z post119685: Re: io-pkt thread context nightmare http://community.qnx.com/sf/go/post119685 Hi Nick,
> Relying on the io-pkt resource manager places everything in the stack context
> and thus the concerns I mentioned in my earlier message with performance, both
> io-pkt affecting your code and your code affecting io-pkt. Best performance
> would be to run in your own thread with your own resource manager and only do
> the stk_context_callback_2(), kthread_create1() dance for those pieces of
> code that need to run in io-pkt stack context.
The problem is finding out which functions have to be called in stack context and which do not. Unfortunately this can be driver dependent, e.g. our lsm sometimes reads the statistics counters with a simple direct call (ifp->if_ioctl()) to the driver. Here we got the assertion in the devctl part of the QNX 7 e1000 driver, which uses a nic_mutex:
case DRVCOM_STATS:
    dstp = (struct drvcom_stats *)ifdc;
    if (ifdc->ifdc_len != sizeof(nic_stats_t)) {
        error = EINVAL;
        break;
    }
    nic_mutex_lock(&i82544->drv_mutex);
    update_stats(i82544);
    nic_mutex_unlock(&i82544->drv_mutex);
    memcpy(&dstp->dcom_stats, &i82544->stats, sizeof(i82544->stats));
    break;
This e1000 driver seems to expect that more than one thread of the BSD scheduler could run through this ioctl?! Is that possible? Is there a rule, e.g. "You must call the ioctl entry of a driver from stack context!"? BTW: why is the memcpy() not in the critical section? OK, let us hope src and dest are aligned and the memcpy() does not copy by moving bytes. ;) What about other calls? Our lsm uses ifq_enqueue(ifp, m) to send a packet.
This seems to work without the stack context, but what is the rule here? What about the mbuf handling functions? Kind Regards Michael Tue, 30 Apr 2019 10:07:32 GMT http://community.qnx.com/sf/go/post119685 Michael Tasche 2019-04-30T10:07:32Z post119683: io-pkt-v4-hc hangs in OMAPL138 http://community.qnx.com/sf/go/post119683 We use QNX 6.5.0 SP1 on an OMAP-L138 based custom SOC. ifconfig and netstat stop responding when the SOC is connected to a LAN with multiple devices. pidin output shows one of the threads of io-pkt-v4-hc is in nanosleep: io-pkt-v4-hc 21r NANOSLEEP When the device is connected to a smaller network with a couple of devices, the unit is observed to work without any issues. The io-pkt driver is started using the following command:
io-pkt-v4-hc -d /lib/devnp-til1xx.so mac=010203040506 -ptcpip
Is there any known issue with the OMAP-L138 ethernet driver that I am not aware of? Fri, 26 Apr 2019 21:25:33 GMT http://community.qnx.com/sf/go/post119683 Abilash Janakiraman 2019-04-26T21:25:33Z post119682: Re: io-pkt thread context nightmare http://community.qnx.com/sf/go/post119682 Sorry, I forgot. You would need to call stk_context_callback_2() to get across from your thread to io-pkt stack context, but then this runs within proc0 on stack context so you would then need to kthread_create1() to get on to a proper coroutine. Relying on the io-pkt resource manager places everything in the stack context and thus the concerns I mentioned in my earlier message with performance, both io-pkt affecting your code and your code affecting io-pkt. Best performance would be to run in your own thread with your own resource manager and only do the stk_context_callback_2(), kthread_create1() dance for those pieces of code that need to run in io-pkt stack context.
I did mention that the io-pkt threading model is highly complicated ;-) Regards, Nick Fri, 26 Apr 2019 13:50:48 GMT http://community.qnx.com/sf/go/post119682 Nick Reilly 2019-04-26T13:50:48Z post119681: Re: Interface specific hooks of protocol driver (lsm-XXX.so) attached with PFIL_TYPE_IFNET are broken in QNX 7 http://community.qnx.com/sf/go/post119681 Hi Oliver, Please get in touch with your support contact, they should be able to help you with roll-out plans for this fix. Fixes are normally released through QNX Software Center, either customer specific or general release. For your earlier questions: 1) There is no issue with casting a pointer to a u_long as QNX uses LP64 on 64 bit (and ILP32 on 32 bit) http://www.qnx.com/developers/docs/7.0.0/#com.qnx.doc.neutrino.prog/topic/64bits_data_types.html 2) Unfortunately there isn't an easy way for a customer to determine the exact flags that were used to build the io-pkt binary. If you have source level access then you would be able to see the common.mk files and see that they contain a define for QNX_MFIB. Regards, Nick Fri, 26 Apr 2019 13:44:13 GMT http://community.qnx.com/sf/go/post119681 Nick Reilly 2019-04-26T13:44:13Z post119680: Re: io-pkt thread context nightmare http://community.qnx.com/sf/go/post119680 Hello Nick,
> Most driver callbacks including the ifp->if_ioctl() expect to run in stack
> context so you will need to use stk_context_callback_2() to get the code to
> run in the stack context.
"stk_context_callback_2()" seems not to be enough. We still get the "proc0" assertion. The e1000 driver additionally uses kthread_create1() inside of the stack context callback. Why? In the meantime we use another approach: We removed our resmgr thread and connect our interface to the normal io-pkt resmgr threads by using their dispatch handle "skt_ctl.dpp". This seems to work perfectly, but I wonder if this approach could have any drawbacks.
Kind Regards Michael Fri, 26 Apr 2019 10:37:03 GMT http://community.qnx.com/sf/go/post119680 Michael Tasche 2019-04-26T10:37:03Z post119679: Re: Interface specific hooks of protocol driver (lsm-XXX.so) attached with PFIL_TYPE_IFNET are broken in QNX 7 http://community.qnx.com/sf/go/post119679 Hi Nick, thank you for your fast reply and valuable answer. But our affected products are middleware (protocol stacks) which are installed on customer systems, and the customer will develop its application on top. Especially for our customers who are in the process of upgrading existing 6.x installations to 7.x, it would be a little bit surprising to tell them they need a newer io-pkt that they have to request with the ID mentioned in your mail. I wonder when this modified version will be distributed on more common channels like the QNX Software Center, so we can state in the prerequisites for our product that you need to update your io-pkt on QNX 7 to at least version x.y.z. Can you also give a short statement on the related ambiguities mentioned in questions 1 and 2 of my initial post? Best regards, Oliver Fri, 26 Apr 2019 09:58:19 GMT http://community.qnx.com/sf/go/post119679 Oliver Thimm 2019-04-26T09:58:19Z post119671: Re: Can WTP be used anywhere a (struct nw_work_thread *) is required? http://community.qnx.com/sf/go/post119671 Hi Nick, Just what I needed - thanks for clarifying. Regards, John Tue, 23 Apr 2019 08:17:49 GMT http://community.qnx.com/sf/go/post119671 John Efstathiades 2019-04-23T08:17:49Z post119670: Re: Can WTP be used anywhere a (struct nw_work_thread *) is required? http://community.qnx.com/sf/go/post119670 Hi John, It actually doesn't matter currently - everywhere the wtp is passed to the driver it has come from the WTP macro. I would suggest that it's probably best to use the wtp passed in to the interrupt handling threads, but there is currently no situation where it is actually required. Regards, Nick.
Mon, 22 Apr 2019 12:57:21 GMT http://community.qnx.com/sf/go/post119670 Nick Reilly 2019-04-22T12:57:21Z post119669: Can WTP be used anywhere a (struct nw_work_thread *) is required? http://community.qnx.com/sf/go/post119669 In an interrupt handling thread can I use m_getcl() and m_freem() instead of m_getcl_wtp() and m_freem_wtp() respectively? Similarly for NW_SIGLOCK() and NW_SIGLOCK_P(). Looking at different driver sources there is inconsistent usage of these routines in interrupt handling threads: sometimes the wtp pointer form is used, in other places the WTP form. What are the rules for using WTP? Sat, 20 Apr 2019 11:24:46 GMT http://community.qnx.com/sf/go/post119669 John Efstathiades 2019-04-20T11:24:46Z post119668: Re: Interface specific hooks of protocol driver (lsm-XXX.so) attached with PFIL_TYPE_IFNET are broken in QNX 7 http://community.qnx.com/sf/go/post119668 Hi Oliver, We found and resolved this issue in January with the interface hook not being called when it should. Please get in touch with your support contact and ask about issue ID 2665276 COREOS-114498 and they should be able to provide you with an updated io-pkt that resolves this. Regards, Nick Thu, 18 Apr 2019 18:47:33 GMT http://community.qnx.com/sf/go/post119668 Nick Reilly 2019-04-18T18:47:33Z post119667: Interface specific hooks of protocol driver (lsm-XXX.so) attached with PFIL_TYPE_IFNET are broken in QNX 7 http://community.qnx.com/sf/go/post119667 Hi, we are a provider of Industrial Ethernet protocol stack solutions. For the required high performance, low latency communication we have developed our own protocol driver, similar to your "lsm-nraw", which runs in the stack context to bypass the IP layer but with a much more optimized data exchange to the application. We have been using this implementation on all 6.x versions since QNX 6.4 without any problems. Unfortunately our implementation seems to be broken in QNX 7, although it compiles without any errors.
In our implementation we call pfil_head_get(int af, u_long dlt) with af set to PFIL_TYPE_IFNET to indicate an interface hook and dlt set to the respective interface pointer according to http://www.qnx.com/developers/docs/6.4.1/io-pkt_en/user_guide/filtering.html. Now I have two initial concerns/questions: 1. The interface pointer has to be cast to u_long, which obviously is a little bit fishy on a 64 bit architecture. 2. At some point in time QNX introduced so-called Forwarding Information Base (FIB) support, which is documented poorly or more or less not at all. In the header <pfil.h> which is included to build our protocol driver, the define QNX_MFIB changes the hook prototype, the first argument of pfil_add_hook(), from int (*func)(void *, struct mbuf **, struct ifnet *, int) into int (*func)(void *, struct mbuf **, struct ifnet *, int, int) The last parameter seems to be FIB related. How can one figure out at compile time how QNX is built, or where is this build option documented for the different QNX versions? All these are more or less oddities one can overcome, but starting with QNX 7 the hook does not seem to be called at all. After several hours of investigation we changed our implementation to hook with pfil_head_get(PFIL_TYPE_IFT, IFT_ETHER) instead of the interface specific implementation PFIL_TYPE_IFNET mentioned above, and this still works on QNX 7, with the obvious drawback that we have to handle all mbufs which are not for our interface, too. Now we got curious and analyzed the ether_input() routine where both hooks are handled. Compared to earlier implementations, again the FIB support seems to make the difference. In previous QNX versions the hooks get called unconditionally, but now a call to if_get_next_fib() is performed before a hook is called. The 2nd parameter of this call is a so-called start_fib which is initialized to -1 before the call to the hooks which are attached with PFIL_TYPE_IFT.
The if_get_next_fib() is called again before the hooks attached with PFIL_TYPE_IFNET are called, but this time the start_fib is not reset to -1 again but has the value returned from the previous call (which is 16). This causes the call to return with 17 and to skip the execution of the PFIL_TYPE_IFNET hooks. 3. My third and most important question is whether you have ever tested the interface specific hooks attached with PFIL_TYPE_IFNET on QNX 7, and if they run in any of your lsm-xxx.so protocol drivers, is there anything special to do (with respect to FIBs?) to make our implementation work in the same way as on 6.x? Unfortunately we have no kernel code insight, so all the results above are based on the analysis of machine code level debugging sessions. Best regards, Oliver Thu, 18 Apr 2019 13:05:00 GMT http://community.qnx.com/sf/go/post119667 Oliver Thimm 2019-04-18T13:05:00Z post119666: Re: io-pkt thread context nightmare http://community.qnx.com/sf/go/post119666 Hello Nick,
> Most driver callbacks including the ifp->if_ioctl() expect to run in stack
> context so you will need to use stk_context_callback_2() to get the code to
> run in the stack context.
I found a sample for the use of this call in the e1000 driver for QNX 7. I will try... Many thanks Michael Wed, 17 Apr 2019 13:09:06 GMT http://community.qnx.com/sf/go/post119666 Michael Tasche 2019-04-17T13:09:06Z post119663: Re: io-pkt thread context nightmare http://community.qnx.com/sf/go/post119663 Yes, the threading model for io-pkt is highly complex and if you get it wrong you will likely get panics and crashes. Most driver callbacks including the ifp->if_ioctl() expect to run in stack context, so you will need to use stk_context_callback_2() to get the code to run in the stack context. Stack context is single threaded, so if something else is using it then your code will have to wait. Equally you should ensure that your code doesn't block the stack context and adversely affect other operations.
Tue, 16 Apr 2019 14:27:14 GMT http://community.qnx.com/sf/go/post119663 Nick Reilly 2019-04-16T14:27:14Z post119660: io-pkt thread context nightmare http://community.qnx.com/sf/go/post119660 Hi, we are just porting our lsm module, which we developed based on the lsm-nraw sample years ago in the good old QNX 6.x times. The resource manager thread for our lsm interface is created with nw_pthread_create(). This thread is calling ifp->if_ioctl() from time to time, to get the interface statistics. Under QNX 7 this results in an io-pkt panic, which is fired by nic_mutex_trylock(), called from the e1000 driver. nic_mutex_trylock() tries to validate the run context by checking the curlwp against proc0. If they are equal, we panic(). As I have learned so far, io-pkt seems to emulate half of the NetBSD kernel to run the BSD TCP/IP stack. My thread, created by nw_pthread_create(), seems to be the wrong one for calling the e1000 driver ioctl. How do I get the correct thread context to allow it: pcreat(), kthread_create(), ...? Do I lose performance if my thread is part of this emulated NetBSD scheduler? I am a bit confused. Please advise. Kind Regards Michael Tue, 16 Apr 2019 08:43:17 GMT http://community.qnx.com/sf/go/post119660 Michael Tasche 2019-04-16T08:43:17Z post119651: Re: devnp-e1000.so Squelch Test errors http://community.qnx.com/sf/go/post119651 Thank you, Hugh. Thu, 11 Apr 2019 13:04:48 GMT http://community.qnx.com/sf/go/post119651 Leonid Khait 2019-04-11T13:04:48Z post119642: Re: devnp-e1000.so Squelch Test errors http://community.qnx.com/sf/go/post119642 Take a look at the use message for the driver - pause_rx_enable and pause_tx_enable. On 2019-04-10, 8:59 AM, "Leonid Khait" <community-noreply@qnx.com> wrote: Please show how to turn on flow control of the I211 and I217 Ethernet controllers with the devnp-e1000.so driver?
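Going by Hugh's pointer, an invocation might look like the following. The option names come from his reply; their exact placement and syntax should be confirmed against the driver's use message:

```sh
# Sketch: enable link-level flow control in devnp-e1000.so
# (pause_rx_enable / pause_tx_enable per Hugh's reply; verify with
# "use /lib/dll/devnp-e1000.so")
io-pkt-v4-hc -d e1000 pause_rx_enable,pause_tx_enable -p tcpip
```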
Wed, 10 Apr 2019 13:01:37 GMT http://community.qnx.com/sf/go/post119642 Hugh Brown 2019-04-10T13:01:37Z post119641: Re: devnp-e1000.so Squelch Test errors http://community.qnx.com/sf/go/post119641 Please show how to turn on flow control of the I211 and I217 Ethernet controllers with the devnp-e1000.so driver? Wed, 10 Apr 2019 12:59:32 GMT http://community.qnx.com/sf/go/post119641 Leonid Khait 2019-04-10T12:59:32Z post119640: Re: devnp-e1000.so Squelch Test errors http://community.qnx.com/sf/go/post119640 If you really want to stop FIFO overruns, you should use flow control. Increasing the number of descriptors does reduce the probability of FIFO overruns, but doesn't prevent them completely. On 2019-04-10, 8:26 AM, "Leonid Khait" <community-noreply@qnx.com> wrote: I'm sorry, Hugh! Do I understand correctly that increasing the number of receive descriptors (by the receive=nnn argument) reduces the probability of an internal Rx FIFO buffer overrun? Wed, 10 Apr 2019 12:34:08 GMT http://community.qnx.com/sf/go/post119640 Hugh Brown 2019-04-10T12:34:08Z post119639: Re: devnp-e1000.so Squelch Test errors http://community.qnx.com/sf/go/post119639 I'm sorry, Hugh! Do I understand correctly that increasing the number of receive descriptors (by the receive=nnn argument) reduces the probability of an internal Rx FIFO buffer overrun?
Wed, 10 Apr 2019 12:25:59 GMT http://community.qnx.com/sf/go/post119639 Leonid Khait 2019-04-10T12:25:59Z post119638: Re: devnp-ravb leading to kernal fault http://community.qnx.com/sf/go/post119638 There is documentation for ptpd and ptpd-avb in the SDP7 utilities documentation. On 2019-04-08, 8:28 AM, "Amit Walvekar" <community-noreply@qnx.com> wrote: I see. Thanks for the info, really helped me out. Btw, is there a PTP daemon also provided? I noticed a couple of them, "ptpd" and "ptpd-avb". I cannot seem to get them working. Error: can't open /var/run/kernel_clock: Bad file descriptor. Is there any user guide? Mon, 08 Apr 2019 12:35:09 GMT http://community.qnx.com/sf/go/post119638 Hugh Brown 2019-04-08T12:35:09Z post119637: Re: devnp-ravb leading to kernal fault http://community.qnx.com/sf/go/post119637 I see. Thanks for the info, really helped me out. Btw, is there a PTP daemon also provided? I noticed a couple of them, "ptpd" and "ptpd-avb". I cannot seem to get them working. Error: can't open /var/run/kernel_clock: Bad file descriptor. Is there any user guide? Mon, 08 Apr 2019 12:28:21 GMT http://community.qnx.com/sf/go/post119637 Amit Walvekar 2019-04-08T12:28:21Z post119636: Re: devnp-ravb leading to kernal fault http://community.qnx.com/sf/go/post119636 If you look at the documentation for slog2, you will see that it isn't interrupt safe. On 2019-04-08, 5:01 AM, "Amit Walvekar" <community-noreply@qnx.com> wrote: Hello, You were right. I had slog2 statements inside the ISR. I removed them and it is working fine. Any idea why the log statements caused the issue?
Mon, 08 Apr 2019 11:34:16 GMT http://community.qnx.com/sf/go/post119636 Hugh Brown 2019-04-08T11:34:16Z post119635: Re: devnp-e1000.so Squelch Test errors http://community.qnx.com/sf/go/post119635 Yes, it is so, as you can see from the description that you posted as well. On 2019-04-06, 12:09 AM, "Leonid Khait" <community-noreply@qnx.com> wrote: From the description of the driver, I understand that increasing the number of receive descriptors with the "receive=4096" argument will not solve the increasing Squelch Test Errors counter problem, since this counter signals that the problem is inside the NIC interface itself: the overrun of its internal FIFO buffer. Is it so? --- From the devnp-e1000.so description: ... The SQE (Squelch Test Errors) counter — one of the fields reported by nicinfo — isn't applicable to devnp-e1000.so, so this driver uses it in a non-standard way. You can lose packets because: you ran out of descriptors (the NIC was able to buffer the packet, but there was no CPU RAM available) or: the NIC was unable to buffer the packet because it overran its internal Rx FIFO. Other drivers add the two together, but this driver uses the SQE counter for internal Rx FIFO overruns, which generally indicate excessive bus latency, perhaps misconfigured link-level flow control, or even misconfigured Rx FIFO watermarks. ... Mon, 08 Apr 2019 11:33:11 GMT http://community.qnx.com/sf/go/post119635 Hugh Brown 2019-04-08T11:33:11Z post119634: Re: devnp-ravb leading to kernal fault http://community.qnx.com/sf/go/post119634 Hello, You were right.
I had slog2 statements inside the ISR. I removed them and it is working fine. Any idea why the log statements caused the issue? Mon, 08 Apr 2019 09:01:46 GMT http://community.qnx.com/sf/go/post119634 Amit Walvekar 2019-04-08T09:01:46Z post119633: Re: devnp-e1000.so Squelch Test errors http://community.qnx.com/sf/go/post119633 From the description of the driver, I understand that increasing the number of receive descriptors with the "receive=4096" argument will not solve the increasing Squelch Test Errors counter problem, since this counter signals that the problem is inside the NIC interface itself: the overrun of its internal FIFO buffer. Is it so? --- From the devnp-e1000.so description: ... The SQE (Squelch Test Errors) counter — one of the fields reported by nicinfo — isn't applicable to devnp-e1000.so, so this driver uses it in a non-standard way. You can lose packets because: you ran out of descriptors (the NIC was able to buffer the packet, but there was no CPU RAM available) or: the NIC was unable to buffer the packet because it overran its internal Rx FIFO. Other drivers add the two together, but this driver uses the SQE counter for internal Rx FIFO overruns, which generally indicate excessive bus latency, perhaps misconfigured link-level flow control, or even misconfigured Rx FIFO watermarks. ... Sat, 06 Apr 2019 04:09:49 GMT http://community.qnx.com/sf/go/post119633 Leonid Khait 2019-04-06T04:09:49Z post119632: Re: devnp-ravb leading to kernal fault http://community.qnx.com/sf/go/post119632 Did you put debug statements in the ISR? AFAIK, the ravb driver is a working driver. On 2019-04-05, 3:19 AM, "Amit Walvekar" <community-noreply@qnx.com> wrote: Hello, I am using QNX 7.0 on an RCar-H3 starter kit. I got the BSP from the QNX Software Center. I am currently exploring the possibilities of AVB on QNX, since I successfully got it working on the same hardware with Linux.
I noticed that a pre-built devnp-ravb driver was already part of the BSP and is functional for legacy ethernet traffic. I built the devnp-ravb driver source code with some debug statements. I load it with io-pkt, and when I try to bring up the interface there is a kernel fault with the below backtrace and a reboot.
Shutdown[0,0] S/C/F=8/9/9 C/D=ffffff80600484d8/ffffff8060102680 state(1)= 1
QNX Version 7.0.0 Release 2017/02/14-16:05:38EST
KSB:ffffff808304c000
$URL: http://svn.ott.qnx.com/product/branches/7.0.0/trunk/services/system/ker/timestamp.c $
[0]PID-TID=1-1? P/T FL=08019001/0c000000 "proc/boot/procnto-smp-instr"
[0]ASPACE PID=213022 PF=08401010 "proc/boot/io-pkt-v6-hc"
I am having doubts whether the delivered source code actually works or is more of a reference. Also, the size of the prebuilt driver (68 kB) is way less than the one I manually built (365 kB) (maybe because I did not strip it). Can anyone help me out with this? Is there also a guide which can help me with the driver's APIs? Thanks, Amit Fri, 05 Apr 2019 11:34:34 GMT http://community.qnx.com/sf/go/post119632 Hugh Brown 2019-04-05T11:34:34Z post119631: devnp-ravb leading to kernal fault http://community.qnx.com/sf/go/post119631 Hello, I am using QNX 7.0 on an RCar-H3 starter kit. I got the BSP from the QNX Software Center. I am currently exploring the possibilities of AVB on QNX, since I successfully got it working on the same hardware with Linux. I noticed that a pre-built devnp-ravb driver was already part of the BSP and is functional for legacy ethernet traffic. I built the devnp-ravb driver source code with some debug statements. I load it with io-pkt, and when I try to bring up the interface there is a kernel fault with the below backtrace and a reboot.
Shutdown[0,0] S/C/F=8/9/9 C/D=ffffff80600484d8/ffffff8060102680 state(1)= 1
QNX Version 7.0.0 Release 2017/02/14-16:05:38EST
KSB:ffffff808304c000
$URL: http://svn.ott.qnx.com/product/branches/7.0.0/trunk/services/system/ker/timestamp.c $
[0]PID-TID=1-1? P/T FL=08019001/0c000000 "proc/boot/procnto-smp-instr"
[0]ASPACE PID=213022 PF=08401010 "proc/boot/io-pkt-v6-hc"
I am having doubts whether the delivered source code actually works or is more of a reference. Also, the size of the prebuilt driver (68 kB) is way less than the one I manually built (365 kB) (maybe because I did not strip it). Can anyone help me out with this? Is there also a guide which can help me with the driver's APIs? Thanks, Amit Fri, 05 Apr 2019 07:19:47 GMT http://community.qnx.com/sf/go/post119631 Amit Walvekar 2019-04-05T07:19:47Z post119627: Re: Concurrency and mutual exclusion in io-pkt network drivers http://community.qnx.com/sf/go/post119627 Hi John, Yes, it's a run-to-completion model with respect to other co-routines unless you yield. No preemption or time-slicing with respect to other co-routines. This is why you only need to consider locking when yielding, and why you need to consider yielding when blocking. Of course this is all still running on a POSIX thread which may be preempted or time-sliced with threads in other processes, so you aren't guaranteed to be on the processor all the time! Regards, Nick. Mon, 01 Apr 2019 13:12:11 GMT http://community.qnx.com/sf/go/post119627 Nick Reilly 2019-04-01T13:12:11Z post119624: Re: Concurrency and mutual exclusion in io-pkt network drivers http://community.qnx.com/sf/go/post119624 Hi Nick,
>
> nic_mutex() is used to provide mutex-like locking for the stack context
> operations - an actual mutex cannot be used.
>
> As I mentioned, the low level call that yields stack context is ltsleep()
> which can be wrapped in tsleep(). In driver code it is usually the nic_delay()
> call that yields context or blockop().
Does this mean that if a stack callback does not yield it will run to completion? In other words, there is no preemption or time slicing in the scheduling of co-routines? Sat, 30 Mar 2019 18:02:32 GMT http://community.qnx.com/sf/go/post119624 John Efstathiades 2019-03-30T18:02:32Z post119622: Re: Concurrency and mutual exclusion in io-pkt network drivers http://community.qnx.com/sf/go/post119622 Hi John, In general it depends on how long the wait is. The rule of thumb that I use is: < 1ms then you can busy wait, otherwise use nic_delay(). When you are busy waiting you are blocking io-pkt stack context, so you are potentially affecting traffic on other interfaces / Unix domain sockets etc. Regards, Nick Thu, 28 Mar 2019 17:19:11 GMT http://community.qnx.com/sf/go/post119622 Nick Reilly 2019-03-28T17:19:11Z post119621: Re: Concurrency and mutual exclusion in io-pkt network drivers http://community.qnx.com/sf/go/post119621 Hi Nick, Thanks again for the clear and detailed information. I take the point re nic_delay(). However, some drivers I've seen use nic_delay() when waiting for register bits to change, e.g. following a reset, or waiting for the MII bus to become free at the start of the MII read/write routines. Is that still preferable to doing a busy-wait using nanosleep_ns()? Regards, John Thu, 28 Mar 2019 16:55:41 GMT http://community.qnx.com/sf/go/post119621 John Efstathiades 2019-03-28T16:55:41Z post119620: Re: Concurrency and mutual exclusion in io-pkt network drivers http://community.qnx.com/sf/go/post119620 Hi John, While the start callback is usually called from stack context, there are 3 scenarios where it is called from an interrupt event handler thread: 1) Bridging. The Ethernet frame is received by the receiving interface's Rx interrupt event handler. The call to ifp->if_input() will perform a bridging lookup and call the output interface's start callback, all within this same context. 2) Fast Forwarding. Similar to the Bridging scenario but with an IP flow.
3) Tx after the Tx descriptors are full. The start callback is usually called by stack context every time that a packet is added to the interface Tx queue. The start callback populates the Tx descriptors, but if it runs out of room then it sets IFF_OACTIVE, enables Tx completion interrupts and returns. When a Tx completion interrupt fires, the Tx interrupt event handler will run. This should take the ifp->if_snd_ex lock, reap the used descriptors, and continue to populate the Tx descriptors now that it has more space. It may run out of space again, in which case this pattern repeats. Eventually it will drain the Tx queue, at which point it should clear IFF_OACTIVE and disable the Tx completion interrupt (to avoid wasting CPU). Now that IFF_OACTIVE is clear, the stack context will call the driver start callback each time it places a packet to be transmitted on the Tx queue. Just to clarify one thing about the ifp->if_snd_ex: it is actually the lock for the interface Tx queue, so io-pkt stack context takes the lock each time it wants to add a packet to the queue. We can reuse it in the stop callback to indicate that the start callback isn't running, because the start callback is always called with the lock held. You are correct about nic_delay() when it yields stack context. The stack context can now perform any other stack context operations, including driver callbacks, timer callouts etc. N.B. This includes calling the detach callback, which invariably leads to a crash: the detach callback removes the interface, the delay expires, and the stack context then tries to switch back to running code that may no longer be in memory - or, if it is, code that refers to an interface that no longer exists and whose structures have been freed. The most painful crashes to debug come when that freed memory is actually reused for another interface. nic_mutex_lock()/nic_mutex_unlock() are used to provide mutex-like locking for stack context operations - an actual mutex cannot be used. 
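[Editor's note] The Tx-full handshake described in scenario 3 can be modelled in a few lines of portable C. This is only a stand-in simulation: sim_nic, sim_start and sim_tx_intr are invented names, and plain integers replace the real struct ifnet, the ifp->if_snd_ex lock and the hardware descriptor ring. It shows just the IFF_OACTIVE set/clear logic Nick describes:

```c
#include <assert.h>

#define IFF_OACTIVE 0x1   /* stand-in for the real interface flag */
#define NUM_TX_DESC 4     /* deliberately tiny descriptor ring */

struct sim_nic {
    int flags;            /* IFF_OACTIVE lives here */
    int desc_in_use;      /* populated Tx descriptors */
    int txq_len;          /* packets waiting on the interface Tx queue */
    int tx_irq_enabled;   /* Tx completion interrupt armed? */
};

/* start callback: in io-pkt this is entered with ifp->if_snd_ex held */
static void sim_start(struct sim_nic *nic)
{
    while (nic->txq_len > 0) {
        if (nic->desc_in_use == NUM_TX_DESC) {
            /* out of descriptors: mark active, arm Tx completion IRQ, return */
            nic->flags |= IFF_OACTIVE;
            nic->tx_irq_enabled = 1;
            return;
        }
        nic->desc_in_use++;   /* populate one descriptor */
        nic->txq_len--;
    }
    /* queue drained: clear OACTIVE and disable the completion interrupt */
    nic->flags &= ~IFF_OACTIVE;
    nic->tx_irq_enabled = 0;
}

/* Tx completion handler: takes the same lock, reaps, then refills */
static void sim_tx_intr(struct sim_nic *nic, int completed)
{
    nic->desc_in_use -= completed;   /* reap used descriptors */
    sim_start(nic);                  /* continue now that there is room */
}
```

In a real driver the body of sim_tx_intr would run in the Tx interrupt event handler thread with the ifp->if_snd_ex lock held, exactly as described above; this sketch omits all locking since it is single threaded.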
As I mentioned, the low level call that yields stack context is ltsleep(), which can be wrapped in tsleep(). In driver code it is usually a nic_delay() or blockop() call that yields context. A note on delays: io-pkt has a best timer granularity of 8.5ms, but it can extend out to 50ms if no traffic is running. This means that a call to nic_delay(1) can be a 50ms delay rather than a 1ms delay. If there are many such calls in succession this can lead to a driver taking many seconds to set up some hardware. In these scenarios it is best to call blockop(). This will yield stack context at that point and execute the code in the io-pkt main thread (thread 1). nic_delay() detects the context it is running under and will switch to using normal sleep calls with the granularity of the system timer (default 1ms). At the completion of the blockop(), the original stack context routine is placed in a ready state - although note that if the stack context is currently performing some other operation it may not run immediately. Regards, Nick. Thu, 28 Mar 2019 15:02:54 GMT http://community.qnx.com/sf/go/post119620 Nick Reilly 2019-03-28T15:02:54Z post119617: Re: Concurrency and mutual exclusion in io-pkt network drivers http://community.qnx.com/sf/go/post119617 Hi Nick, > The ifp->if_snd_ex is only held by the stack when the driver start callback is > called, the stop callback needs to lock it first to ensure that the driver > start isn't currently running. How is it possible for the stack to call the stop callback if the start callback is still running? Doesn't that imply another thread of execution? I thought the driver callbacks (and callouts) were invoked in the stack context. > nic_mutex_lock() is used to synchronise across multiple io-pkt coroutines > running on the stack context thread. 
This is necessary any time that the stack context thread can yield - at the lowest level it is a call to ltsleep(), but at the driver level this is often wrapped up in a nic_delay() call. Does this mean that if a driver callback uses nic_delay() the stack context can invoke another driver callback (or timer callout)? The nic_mutex is therefore used to protect driver critical resources in this situation - is that correct? Apart from nic_delay(), what other calls result in a callback yielding to another stack coroutine that could result in the stack entering another driver callback? Regards, John Thu, 28 Mar 2019 09:10:02 GMT http://community.qnx.com/sf/go/post119617 John Efstathiades 2019-03-28T09:10:02Z post119616: Re: Concurrency and mutual exclusion in io-pkt network drivers http://community.qnx.com/sf/go/post119616 The ifp->if_snd_ex is only held by the stack when the driver start callback is called; the stop callback needs to lock it first to ensure that the driver start isn't currently running. A new mutex would be initialised with NW_EX_INIT() and destroyed with NW_EX_DESTROY(). nic_mutex_lock() is used to synchronise across multiple io-pkt coroutines running on the stack context thread. This is necessary any time that the stack context thread can yield - at the lowest level it is a call to ltsleep(), but at the driver level this is often wrapped up in a nic_delay() call. Remember that io-pkt stack context is single threaded, hence you need to yield rather than sitting in a blocking call. If you don't yield then you can impact other stack context operations such as traffic on other interfaces. Wed, 27 Mar 2019 18:47:23 GMT http://community.qnx.com/sf/go/post119616 Nick Reilly 2019-03-27T18:47:23Z post119615: Re: MDI API description in QNX7 documentation http://community.qnx.com/sf/go/post119615 I can't find an up-to-date example in a BSP, so I've attached it here. 
Wed, 27 Mar 2019 18:29:55 GMT http://community.qnx.com/sf/go/post119615 Nick Reilly 2019-03-27T18:29:55Z post119614: Re: MDI API description in QNX7 documentation http://community.qnx.com/sf/go/post119614 Hi Nick, Thanks, I found some 6.3.2 documentation online which covers many of the routines, even though it is quite old. Is the smsc9500 driver available in one of the reference BSPs? Regards, John Wed, 27 Mar 2019 15:19:23 GMT http://community.qnx.com/sf/go/post119614 John Efstathiades 2019-03-27T15:19:23Z post119613: Re: Concurrency and mutual exclusion in io-pkt network drivers http://community.qnx.com/sf/go/post119613 Hi Nick, Thanks - it's beginning to make more sense now. I see the stack already uses ifp->if_snd_ex to serialise access to transmit resources. It appears this mutex is held when the start and stop callbacks are entered - is that correct? Are you saying I can create an additional mutex if I need to serialise access to other driver resources in stack context and interrupt processing threads? Should the new mutex be initialised using NW_EX_INIT? What are the rules for using nic_mutex_lock()? I mean, how do you recognise situations where it might be required? Regards, John Wed, 27 Mar 2019 14:47:50 GMT http://community.qnx.com/sf/go/post119613 John Efstathiades 2019-03-27T14:47:50Z post119612: Re: MDI API description in QNX7 documentation http://community.qnx.com/sf/go/post119612 Hi John, Unfortunately there isn't a document describing this (and many other io-pkt APIs). I tend to refer people to the smsc9500 driver as one where the MDI driver API has been thoroughly tested and is known to work well. Regards, Nick Wed, 27 Mar 2019 12:41:21 GMT http://community.qnx.com/sf/go/post119612 Nick Reilly 2019-03-27T12:41:21Z post119611: Re: Concurrency and mutual exclusion in io-pkt network drivers http://community.qnx.com/sf/go/post119611 Hi John, Yes, the stack context thread and multiple interrupt event handling threads can run concurrently. 
nic_mutex_lock() is for locking the multiple coroutines that can run on the stack context thread; it will panic() if called from an interrupt event handling thread. Take a look at NW_SIGLOCK() / NW_SIGLOCK_P() for providing locking between an interrupt event handling thread and a stack context thread. Regards, Nick Wed, 27 Mar 2019 12:37:59 GMT http://community.qnx.com/sf/go/post119611 Nick Reilly 2019-03-27T12:37:59Z post119609: MDI API description in QNX7 documentation http://community.qnx.com/sf/go/post119609 Is the MDI API described in the QNX7 documentation set? It does not appear to be part of the QNX Neutrino RTOS C Library Reference. Is there a separate networking API reference document? Thanks, John Wed, 27 Mar 2019 11:11:01 GMT http://community.qnx.com/sf/go/post119609 John Efstathiades 2019-03-27T11:11:01Z post119608: Concurrency and mutual exclusion in io-pkt network drivers http://community.qnx.com/sf/go/post119608 Hello, I am developing a driver for a PCI-based network interface for QNX 7. I have read the Writing Network Drivers for io-pkt section in the Core Networking Stack User's Guide and looked at the ravb and mx6x driver source. It is still not entirely clear to me if and when there may be more than one thread executing the driver code. I don't anticipate having to create my own separate driver threads. My device has MSI-X interrupts so I plan to have multiple ISRs, one for each MSI-X interrupt source. I would therefore expect that multiple interrupt event handling threads could run concurrently. If that is possible I will need to protect driver resources, including device registers, from concurrent access. Can the io-pkt stack context thread run concurrently with the interrupt event handling threads? For example, can the stack queue packets for transmission or update the link speed, etc., while receive and transmit interrupt events are being processed by io-pkt worker threads? The driver source I have looked at has almost no locking. 
There is a little use of nic_mutex_lock/unlock but only in some peripheral cases. Is nic_mutex_lock/unlock the right mechanism to provide access serialisation in the driver code? BTW, I cannot find mention of nic_mutex_lock/unlock and nic_delay in the documentation. Where are these functions described? If anyone can shed some light on this it would be greatly appreciated. Thanks, John Wed, 27 Mar 2019 11:04:07 GMT http://community.qnx.com/sf/go/post119608 John Efstathiades 2019-03-27T11:04:07Z post119605: Re: io-pkt-v4-hc blocking http://community.qnx.com/sf/go/post119605 Please try the attached io-pkt-v4-hc. On 2019-03-25, 8:25 AM, "Leonid Khait" <community-noreply@qnx.com> wrote: Occasionally we have a problem removing io-pkt-v4-hc from execution with message ... Process 286737 (io-pkt-v4-hc) terminated SIGSEGV code=1 fltno=11 ip=08101485 (io-pkt-v4-hc@m_free_wtp+0x15) ref=82d1b0ec. ... Please explain what could be the reason? Could there be a problem with devnp-e1000.so? Could the reason be a malfunction of the Ethernet interface chip? The core dump, 'use -i io-pkt-v4-hc' output and error message photos are attached to this message. _______________________________________________ Networking Drivers http://community.qnx.com/sf/go/post119603 To cancel your subscription to this discussion, please e-mail drivers-networking-unsubscribe@community.qnx.com Tue, 26 Mar 2019 15:02:33 GMT http://community.qnx.com/sf/go/post119605 Hugh Brown 2019-03-26T15:02:33Z post119604: Re: SIOCGDRVSPEC ioctl: Operation not permitted http://community.qnx.com/sf/go/post119604 Hello again, I changed the socket type from AF_UNIX to AF_INET and the ioctl is reaching the driver. Tue, 26 Mar 2019 05:33:35 GMT http://community.qnx.com/sf/go/post119604 Amit Walvekar 2019-03-26T05:33:35Z post119603: Re: io-pkt-v4-hc blocking http://community.qnx.com/sf/go/post119603 Occasionally we have a problem removing io-pkt-v4-hc from execution with message ... 
Process 286737 (io-pkt-v4-hc) terminated SIGSEGV code=1 fltno=11 ip=08101485 (io-pkt-v4-hc@m_free_wtp+0x15) ref=82d1b0ec. ... Please explain what could be the reason? Could there be a problem with devnp-e1000.so? Could the reason be a malfunction of the Ethernet interface chip? The core dump, 'use -i io-pkt-v4-hc' output and error message photos are attached to this message. Mon, 25 Mar 2019 12:25:08 GMT http://community.qnx.com/sf/go/post119603 Leonid Khait 2019-03-25T12:25:08Z post119601: SIOCGDRVSPEC ioctl: Operation not permitted http://community.qnx.com/sf/go/post119601 Hello, I am using Neutrino 7.0 on an RCAR H3 starter kit. I wanted to check the functionality of the ravb driver provided with the BSP. The ioctl calls for driver specific commands return error code 103: Operation not permitted. I tried another method in this forum using IOV's and MsgSendv_r but they return the same error as well. Here is the code snippet I use. int sock = socket(AF_UNIX, SOCK_DGRAM, IPPROTO_IP); if (sock == -1) { return -1; }; struct ifdrv drv; ptp_time_t ts; drv.ifd_cmd = PTP_GET_TIME; drv.ifd_len = sizeof(ts); drv.ifd_data = &ts; strcpy(drv.ifd_name,"ravb0"); int retVal = ioctl(sock, SIOCGDRVSPEC,&drv); if(retVal) { cout << "error: " << errno << endl; cout << strerror(errno) << endl; } I checked the driver code as well and it supports the command. Can you please help? Mon, 25 Mar 2019 10:56:11 GMT http://community.qnx.com/sf/go/post119601 Amit Walvekar 2019-03-25T10:56:11Z post119596: Re: io-pkt-v4-hc blocking http://community.qnx.com/sf/go/post119596 > Dear Hugh, > your new driver version works much more stably on the ping test, > especially with the argument receive = 4096. > > We will check if this version solves network problems that we occasionally > have. 
> > Thank you very much Thu, 21 Mar 2019 06:27:38 GMT http://community.qnx.com/sf/go/post119596 Leonid Khait 2019-03-21T06:27:38Z post119595: Re: io-pkt-v4-hc blocking http://community.qnx.com/sf/go/post119595 Dear Hugh, your new driver version works much more stably on the ping test, especially with the argument receive = 4096. We will check if this version solves network problems that we occasionally have. Thank you very much Thu, 21 Mar 2019 05:16:15 GMT http://community.qnx.com/sf/go/post119595 Leonid Khait 2019-03-21T05:16:15Z post119593: Re: io-pkt-v4-hc blocking http://community.qnx.com/sf/go/post119593 Please try the attached driver. Also realize that you are trying to send 200000 messages in 1 second, so it might be an idea to increase the receive and transmit buffers with the "receive=nnn" and "transmit=nnn" command line options to the driver. The defaults for both are 512. On 2019-03-20, 12:16 AM, "Leonid Khait" <community-noreply@qnx.com> wrote: #use -i devnp-e1000.so NAME=devnp-e1000.so DESCRIPTION=Driver for Intel 82544 Gigabit Ethernet controllers DATE=2015/11/30-12:50:20-EST STATE=stable HOST=gusbuild8 USER=builder VERSION=1344 TAGID=PSP_networking_br650_be650SP1 Thanks in advance, Leonid _______________________________________________ Networking Drivers http://community.qnx.com/sf/go/post119590 To cancel your subscription to this discussion, please e-mail drivers-networking-unsubscribe@community.qnx.com Wed, 20 Mar 2019 12:17:52 GMT http://community.qnx.com/sf/go/post119593 Hugh Brown 2019-03-20T12:17:52Z post119590: Re: io-pkt-v4-hc blocking http://community.qnx.com/sf/go/post119590 #use -i devnp-e1000.so NAME=devnp-e1000.so DESCRIPTION=Driver for Intel 82544 Gigabit Ethernet controllers DATE=2015/11/30-12:50:20-EST STATE=stable HOST=gusbuild8 USER=builder VERSION=1344 TAGID=PSP_networking_br650_be650SP1 Thanks in advance, Leonid Wed, 20 Mar 2019 04:16:31 GMT http://community.qnx.com/sf/go/post119590 Leonid Khait 2019-03-20T04:16:31Z post119589: 
Re: io-pkt-v4-hc blocking http://community.qnx.com/sf/go/post119589 Please post the output from "use -i devnp-e1000.so". On 2019-03-13, 11:29 PM, "Leonid Khait" <community-noreply@qnx.com> wrote: When using QNX 6.5 and an Intel Ethernet card ... #io-pkt-v4-hc -de1000 #ifconfig wm0 192.168.1.1 up ... and we try to send a lot of pings through the network #ping -l 100000 -s1590 192.168.1.2 then the io-pkt-v4-hc stack becomes blocked. When we try to use another RTL Ethernet card ... #io-pkt-v4-hc -drtl #ifconfig en0 192.168.1.1 up #ping -l 100000 -s1590 192.168.1.2 then no such blocking occurs. What could be the reason? _______________________________________________ Networking Drivers http://community.qnx.com/sf/go/post119579 To cancel your subscription to this discussion, please e-mail drivers-networking-unsubscribe@community.qnx.com Tue, 19 Mar 2019 11:52:04 GMT http://community.qnx.com/sf/go/post119589 Hugh Brown 2019-03-19T11:52:04Z post119579: io-pkt-v4-hc blocking http://community.qnx.com/sf/go/post119579 When using QNX 6.5 and an Intel Ethernet card ... #io-pkt-v4-hc -de1000 #ifconfig wm0 192.168.1.1 up ... and we try to send a lot of pings through the network #ping -l 100000 -s1590 192.168.1.2 then the io-pkt-v4-hc stack becomes blocked. When we try to use another RTL Ethernet card ... #io-pkt-v4-hc -drtl #ifconfig en0 192.168.1.1 up #ping -l 100000 -s1590 192.168.1.2 then no such blocking occurs. What could be the reason? Thu, 14 Mar 2019 03:29:12 GMT http://community.qnx.com/sf/go/post119579 Leonid Khait 2019-03-14T03:29:12Z post119564: Re: How to disable receive in mpc85xx http://community.qnx.com/sf/go/post119564 I have no idea as to what is happening, so you will have to debug your code. On 2019-03-05, 10:14 AM, "Vijaya V" <community-noreply@qnx.com> wrote: Thanks a lot for your response. I am testing with these changes. I have applied a storm such that receive disable and enable happen frequently. It is working for quite some time. 
And then there is a kind of hang on the communications side. I can see through a print statement that receive is enabled. My application also seems to be running. The problem is occurring randomly. How do I debug this situation? Is there any tool or utility? Thanks & Regards Vijaya _______________________________________________ Networking Drivers http://community.qnx.com/sf/go/post119563 To cancel your subscription to this discussion, please e-mail drivers-networking-unsubscribe@community.qnx.com Tue, 05 Mar 2019 15:36:49 GMT http://community.qnx.com/sf/go/post119564 Hugh Brown 2019-03-05T15:36:49Z post119563: Re: How to disable receive in mpc85xx http://community.qnx.com/sf/go/post119563 Thanks a lot for your response. I am testing with these changes. I have applied a storm such that receive disable and enable happen frequently. It is working for quite some time. And then there is a kind of hang on the communications side. I can see through a print statement that receive is enabled. My application also seems to be running. The problem is occurring randomly. How do I debug this situation? Is there any tool or utility? Thanks & Regards Vijaya Tue, 05 Mar 2019 15:14:51 GMT http://community.qnx.com/sf/go/post119563 Vijaya V 2019-03-05T15:14:51Z post119562: Re: How to disable receive in mpc85xx http://community.qnx.com/sf/go/post119562 *(base + MPC_DMACTRL) &= ~(DMACTRL_GRS | DMACTRL_GTS); On 2019-03-05, 8:46 AM, "Vijaya V" <community-noreply@qnx.com> wrote: I think the code you provided should be added while disabling receive. Similarly, is there anything to be added while enabling receive? 
_______________________________________________ Networking Drivers http://community.qnx.com/sf/go/post119561 To cancel your subscription to this discussion, please e-mail drivers-networking-unsubscribe@community.qnx.com Tue, 05 Mar 2019 13:48:53 GMT http://community.qnx.com/sf/go/post119562 Hugh Brown 2019-03-05T13:48:53Z post119561: Re: How to disable receive in mpc85xx http://community.qnx.com/sf/go/post119561 I think the code you provided should be added while disabling receive. Similarly, is there anything to be added while enabling receive? Tue, 05 Mar 2019 13:46:17 GMT http://community.qnx.com/sf/go/post119561 Vijaya V 2019-03-05T13:46:17Z post119560: Re: How to disable receive in mpc85xx http://community.qnx.com/sf/go/post119560 You might want to add the following as well: if ((*(base + MPC_DMACTRL) & DMACTRL_GRS) == 0) { /* Graceful receive stop and wait for completion. */ *(base + MPC_DMACTRL) |= DMACTRL_GRS; timeout = MPC_TIMEOUT; do { nanospin_ns(10); if (! --timeout) break; status = *(base + MPC_IEVENT); } while ((status & IEVENT_GRSC) != IEVENT_GRSC); if (!timeout) { log(LOG_ERR, "%s(): DMA GRS stop failed", __FUNCTION__); } } On 2019-03-05, 5:54 AM, "Vijaya V" <community-noreply@qnx.com> wrote: I got hold of the code. For enabling and disabling Rx, I am trying the code below: Disable Rx: *(base + MPC_MACCFG1) &= ~MACCFG1_RXEN; Enable Rx: *(base + MPC_MACCFG1) |= MACCFG1_RXEN; Is this sufficient or do I need to modify any other registers? Thanks & Regards Vijaya _______________________________________________ Networking Drivers http://community.qnx.com/sf/go/post119559 To cancel your subscription to this discussion, please e-mail drivers-networking-unsubscribe@community.qnx.com Tue, 05 Mar 2019 12:35:32 GMT http://community.qnx.com/sf/go/post119560 Hugh Brown 2019-03-05T12:35:32Z post119559: Re: How to disable receive in mpc85xx http://community.qnx.com/sf/go/post119559 I got hold of the code. 
For enabling and disabling Rx, I am trying the code below: Disable Rx: *(base + MPC_MACCFG1) &= ~MACCFG1_RXEN; Enable Rx: *(base + MPC_MACCFG1) |= MACCFG1_RXEN; Is this sufficient or do I need to modify any other registers? Thanks & Regards Vijaya Tue, 05 Mar 2019 10:54:18 GMT http://community.qnx.com/sf/go/post119559 Vijaya V 2019-03-05T10:54:18Z post119551: Re: Does QNX support Netlink messages to receive interface and route events http://community.qnx.com/sf/go/post119551 We do not support Linux Netlink, but we do support the BSD Routing Socket. This will give you route and interface events. Thu, 28 Feb 2019 15:40:57 GMT http://community.qnx.com/sf/go/post119551 Nick Reilly 2019-02-28T15:40:57Z post119550: Does QNX support Netlink messages to receive interface and route events http://community.qnx.com/sf/go/post119550 Similar to Linux Netlink messages, does QNX support a route and interface event mechanism? Thu, 28 Feb 2019 06:37:10 GMT http://community.qnx.com/sf/go/post119550 Harish Ambati(deleted) 2019-02-28T06:37:10Z post119548: Re: How to disable receive in mpc85xx http://community.qnx.com/sf/go/post119548 It requires changes to the driver code. Wed, 27 Feb 2019 14:44:05 GMT http://community.qnx.com/sf/go/post119548 Nick Reilly 2019-02-27T14:44:05Z post119547: Re: How to disable receive in mpc85xx http://community.qnx.com/sf/go/post119547 How do I achieve this custom ioctl()? Is there sample code for shutting off receive in hardware? Does it require changes to driver code? Appreciate your help on this. Thanks & Regards Vijaya Wed, 27 Feb 2019 14:43:25 GMT http://community.qnx.com/sf/go/post119547 Vijaya V 2019-02-27T14:43:25Z post119546: Re: How to disable receive in mpc85xx http://community.qnx.com/sf/go/post119546 I suggest contacting your support contact; they should be able to provide you with the latest version of the driver code. 
You would need to add a custom ioctl() that shuts off the receive in hardware to get it to drop packets with the minimum CPU load if you are worried about network storms. Any other option is still going to have some CPU usage as it runs the Rx descriptor ring. Wed, 27 Feb 2019 14:24:04 GMT http://community.qnx.com/sf/go/post119546 Nick Reilly 2019-02-27T14:24:04Z post119545: Re: How to disable receive in mpc85xx http://community.qnx.com/sf/go/post119545 Thanks for your reply. I am trying to deal with network storm conditions. I need to disable Rx when the number of packets received crosses a limit in a given time. This is to give the application code the required time to process already received packets. Taking down the interface (using ifconfig down, etc.) is not an option, as it would disturb transmit as well. 1) What are your suggestions to handle storm conditions? Are there ways other than taking down the interface? 2) I only have the devnp-mpc85xx.so file. How do I get the latest driver code? Thanks & Regards Vijaya Wed, 27 Feb 2019 14:14:43 GMT http://community.qnx.com/sf/go/post119545 Vijaya V 2019-02-27T14:14:43Z post119544: Re: How to disable receive in mpc85xx http://community.qnx.com/sf/go/post119544 There's no way in the standard driver to just disable receive and leave transmit active. If you need to do this then you would need to customise the driver and drop the received packets. If you need to stop receiving and also transmitting then you could do the equivalent of "ifconfig ... down". Do an ioctl() for SIOCGIFFLAGS, clear IFF_UP from the returned flags, and then do an ioctl() for SIOCSIFFLAGS. Tue, 26 Feb 2019 14:25:15 GMT http://community.qnx.com/sf/go/post119544 Nick Reilly 2019-02-26T14:25:15Z post119543: How to disable receive in mpc85xx http://community.qnx.com/sf/go/post119543 Hi, I need some help. I am using the devnp-mpc85xx.so driver and the stack version is io-pkt-v6-hc. 
Based on certain conditions I need to disable receiving packets for some time and then enable it later. I wanted to achieve it in my application code. Can someone tell me what I should do to achieve this? Thanks & Regards Vijaya Tue, 26 Feb 2019 14:18:16 GMT http://community.qnx.com/sf/go/post119543 Vijaya V 2019-02-26T14:18:16Z post119360: Re: devnp-e1000.so Squelch Test errors http://community.qnx.com/sf/go/post119360 You can try increasing the number of receive buffers with the "receive=nnn" command line option, where nnn can be between 64 and 4096. The default used to be 128 receive buffers, so you can try anything higher than that. On 2018-12-14, 9:01 PM, "Leonid Khait" <community-noreply@qnx.com> wrote: Sorry my mistake: ... Driver devnp-e100.so vid = 0x8086, did = 0x15b3 ... So, is it a hardware problem or an e1000 driver error? The problem happens only infrequently. How can we solve it? Thank you. _______________________________________________ Networking Drivers http://community.qnx.com/sf/go/post119359 To cancel your subscription to this discussion, please e-mail drivers-networking-unsubscribe@community.qnx.com Mon, 17 Dec 2018 12:45:19 GMT http://community.qnx.com/sf/go/post119360 Hugh Brown 2018-12-17T12:45:19Z post119359: Re: devnp-e1000.so Squelch Test errors http://community.qnx.com/sf/go/post119359 Sorry my mistake: ... Driver devnp-e100.so vid = 0x8086, did = 0x15b3 ... So, is it a hardware problem or an e1000 driver error? The problem happens only infrequently. How can we solve it? Thank you. Sat, 15 Dec 2018 02:01:01 GMT http://community.qnx.com/sf/go/post119359 Leonid Khait 2018-12-15T02:01:01Z post119358: Re: devnp-e1000.so Squelch Test errors http://community.qnx.com/sf/go/post119358 Squelch test errors are from the missed packet count register (receive overruns) of the e1000. On 2018-12-14, 2:21 AM, "Leonid Khait" <community-noreply@qnx.com> wrote: #nicinfo wm0 shows an increase in the number of errors ... Squelch Test errors ....... ... 
This leads to failures in the Qnet and TCP/IP protocols. Driver devnp-qnet.so vid = 0x8086, did = 0x15b3 NAME = devnp-e1000.so DESCRIPTION = Driver for Intel 82544 Gigabit Ethernet controllers DATE = 2015/11/30-12:50:20-EST STATE = stable HOST = gusbuild8 USER = builder VERSION = 1344 TAGID = PSP_networking_br650_be650SP1 What could be the causes of the failures, and what are your recommendations? Thanks _______________________________________________ Networking Drivers http://community.qnx.com/sf/go/post119357 To cancel your subscription to this discussion, please e-mail drivers-networking-unsubscribe@community.qnx.com Fri, 14 Dec 2018 13:42:31 GMT http://community.qnx.com/sf/go/post119358 Hugh Brown 2018-12-14T13:42:31Z post119357: devnp-e1000.so Squelch Test errors http://community.qnx.com/sf/go/post119357 #nicinfo wm0 shows an increase in the number of errors ... Squelch Test errors ....... ... This leads to failures in the Qnet and TCP/IP protocols. Driver devnp-qnet.so vid = 0x8086, did = 0x15b3 NAME = devnp-e1000.so DESCRIPTION = Driver for Intel 82544 Gigabit Ethernet controllers DATE = 2015/11/30-12:50:20-EST STATE = stable HOST = gusbuild8 USER = builder VERSION = 1344 TAGID = PSP_networking_br650_be650SP1 What could be the causes of the failures, and what are your recommendations? Thanks Fri, 14 Dec 2018 07:21:09 GMT http://community.qnx.com/sf/go/post119357 Leonid Khait 2018-12-14T07:21:09Z post119305: Re: Intel I210 NIC only shows link status DOWN http://community.qnx.com/sf/go/post119305 That's fine. Thanks again for all your help. Thu, 22 Nov 2018 15:48:21 GMT http://community.qnx.com/sf/go/post119305 John Scarrott 2018-11-22T15:48:21Z post119304: Re: Intel I210 NIC only shows link status DOWN http://community.qnx.com/sf/go/post119304 At this stage I think it is best that you contact your QNX Sales rep to make arrangements to get this problem resolved. Thanks, Hugh. 
On 2018-11-22, 10:31 AM, "John Scarrott" <community-noreply@qnx.com> wrote: Still no luck. Slog here: Jan 01 00:01:14.513 iopkt.36871 0 -----ONLINE----- Jan 01 00:01:14.514 iopkt.36871 main_buffer* 0 tcpip starting Jan 01 00:01:14.514 iopkt.36871 main_buffer 0 Unable to open /dev/random: errno: 2 Jan 01 00:01:14.514 iopkt.36871 main_buffer 0 Falling back on internal pseudo random generator Jan 01 00:01:14.516 iopkt.36871 main_buffer 0 initializing IPsec... Jan 01 00:01:14.516 iopkt.36871 main_buffer 0 done Jan 01 00:01:14.516 iopkt.36871 main_buffer 0 IPsec: Initialized Security Association Processing. Jan 01 00:01:14.519 iopkt.36871 main_buffer 0 devnp-e1000.so (null) Jan 01 00:01:14.521 io_pkt_v6_hc.36871 0 -----ONLINE----- Jan 01 00:01:14.521 io_pkt_v6_hc.36871 pci_log* 0 INFO ,1,0,4 [36871:2]: SLOG module load successful for pid 36871 (io-pkt-v6-hc) Jan 01 00:01:14.521 io_pkt_v6_hc.36871 pci_log 0 INFO ,1,0,4 [36871:2]: Attempt module load of /lib/dll/pci/pci_hw-Intel_x86_APL.so Jan 01 00:01:14.521 io_pkt_v6_hc.36871 pci_log 0 INFO ,1,0,4 [36871:2]: Module is compatible with Library ver 2.0 Jan 01 00:01:14.521 io_pkt_v6_hc.36871 pci_log 0 INFO ,1,0,4 [36871:2]: Module /lib/dll/pci/pci_hw-Intel_x86_APL.so, v2.1 loaded successfully Jan 01 00:01:14.522 io_pkt_v6_hc.36871..0 0 -----ONLINE----- Jan 01 00:01:14.522 io_pkt_v6_hc.36871 pci_log 0 INFO ,1,0,4 [36871:2]: Attempt module load of /lib/dll/pci/pci_debug2.so Jan 01 00:01:14.522 io_pkt_v6_hc.36871 pci_log 0 INFO ,1,0,4 [36871:2]: Module /lib/dll/pci/pci_debug2.so, v2.1 loaded successfully Jan 01 00:01:14.522 io_pkt_v6_hc.36871..0 pci_dbg* 0 DEBUG,1,1,4 [36871:2]: find_ecam_base(): trying offset 0x60 for vid/did 8086/5af0 Jan 01 00:01:14.523 io_pkt_v6_hc.36871..0 pci_dbg 0 DEBUG,1,1,4 [36871:2]: find_ecam_base(): found ecam base 0xe0000000 Jan 01 00:01:14.534 io_pkt_v6_hc.36871..1 0 -----ONLINE----- Jan 01 00:01:14.534 iopkt.36871 main_buffer 0 wm0 Jan 01 00:01:14.534 io_pkt_v6_hc.36871 pci_log 0 INFO ,1,1,4 
[36871:2]: Successful connection to PCI server on /dev/pci Jan 01 00:01:14.534 io_pkt_v6_hc.36871 pci_log 0 INFO ,1,0,4 [36871:2]: Capability modules will be searched for in directory /lib/dll/pci Jan 01 00:01:14.534 io_pkt_v6_hc.36871 pci_log 0 INFO ,1,0,4 [36871:2]: Attempt module load of /lib/dll/pci/pci_cap-0x10-8086157b.so Jan 01 00:01:14.534 io_pkt_v6_hc.36871 pci_log 0 INFO ,1,0,4 [36871:2]: Re-attempt module load of /lib/dll/pci/pci_cap-0x10.so Jan 01 00:01:14.534 io_pkt_v6_hc.36871..1 slog* 0 *** found B5:D0:F0 Jan 01 00:01:14.534 io_pkt_v6_hc.36871..1 slog 0 *** device is supported, attaching Jan 01 00:01:14.534 pci_server.3..0 pci_dbg 0 DEBUG,1,3,4 [3:3]: msg_handler(msg type: 33, connect_entry: 806e0f8) reply_len = 8, OK [PCI_ERR_OK] Jan 01 00:01:14.535 io_pkt_v6_hc.36871 pci_log 0 INFO ,1,1,4 [36871:2]: Module is compatible with Library ver 2.0 Jan 01 00:01:14.535 io_pkt_v6_hc.36871 pci_log 0 INFO ,1,0,4 [36871:2]: Module /lib/dll/pci/pci_cap-0x10.so, v2.0 loaded successfully Jan 01 00:01:14.535 io_pkt_v6_hc.36871 pci_log 0 INFO ,1,1,4 [36871:2]: B5:D0:F0 - Check for /lib/dll/pci/pci_cap-0x10-8086157b.so ... 
not found Jan 01 00:01:14.535 io_pkt_v6_hc.36871 pci_log 0 INFO ,1,1,4 [36871:2]: B5:D0:F0 - Found /lib/dll/pci/pci_cap-0x10.so Jan 01 00:01:14.535 io_pkt_v6_hc.36871 pci_log 0 INFO ,1,0,4 [36871:2]: Attempt module load of /lib/dll/pci/pci_cap-0x11-8086157b.so Jan 01 00:01:14.535 io_pkt_v6_hc.36871 pci_log 0 INFO ,1,0,4 [36871:2]: Re-attempt module load of /lib/dll/pci/pci_cap-0x11.so Jan 01 00:01:14.535 io_pkt_v6_hc.36871..1 slog 0 Unable to enable PCIe capabilities, Requested Operation, Condition Or Data Already Exists [PCI_ERR_EALREADY] Jan 01 00:01:14.536 io_pkt_v6_hc.36871 pci_log 0 INFO ,1,1,4 [36871:2]: Module is compatible with Library ver 2.0 Jan 01 00:01:14.536 io_pkt_v6_hc.36871 pci_log 0 INFO ,1,0,4 [36871:2]: Module /lib/dll/pci/pci_cap-0x11.so, v2.0 loaded successfully Jan 01 00:01:14.536 io_pkt_v6_hc.36871 pci_log 0 INFO ,1,1,4 [36871:2]: B5:D0:F0 - Check for /lib/dll/pci/pci_cap-0x11-8086157b.so ... not found Jan 01 00:01:14.536 io_pkt_v6_hc.36871 pci_log 0 INFO ,1,1,4 [36871:2]: B5:D0:F0 - Found /lib/dll/pci/pci_cap-0x11.so Jan 01 00:01:14.536 io_pkt_v6_hc.36871..0 pci_dbg 0 DEBUG,1,3,4 [36871:2]: _cap_msix_get_nirq(820e088) returns 5 Jan 01 00:01:14.536 io_pkt_v6_hc.36871..0 pci_dbg 0 DEBUG,1,3,4 [36871:2]: _cap_msix_get_nirq(820ad78) returns 5 Jan 01 00:01:14.536 io_pkt_v6_hc.36871..0 pci_dbg 0 DEBUG,1,3,4 [36871:2]: _cap_msix_get_nirq(820ad78) returns 5 Jan 01 00:01:14.536 io_pkt_v6_hc.36871..0 pci_dbg 0 DEBUG,1,3,4 [36871:2]: _cap_msix_get_nirq(820ad78) returns 5 Jan 01 00:01:14.536 io_pkt_v6_hc.36871..0 pci_dbg 0 DEBUG,1,3,4 [36871:2]: _cap_msix_get_nirq(820ad78) returns 5 Jan 01 00:01:14.536 io_pkt_v6_hc.36871..0 pci_dbg 0 DEBUG,1,3,4 [36871:2]: _cap_msix_get_nirq(820ad78) returns 5 Jan 01 00:01:14.536 io_pkt_v6_hc.36871..0 pci_dbg 0 DEBUG,1,3,4 [36871:2]: _cap_msix_get_nirq(820ad78) returns 5 Jan 01 00:01:14.536 io_pkt_v6_hc.36871..0 pci_dbg 0 DEBUG,1,3,4 [36871:2]: _cap_msix_get_nirq(820ad78) returns 5 Jan 01 00:01:14.536 
io_pkt_v6_hc.36871..1 slog 0 MSI-X interrupt entry 3 disabled Jan 01 00:01:14.536 io_pkt_v6_hc.36871..1 slog 0 MSI-X interrupt entry 4 disabled Jan 01 00:01:14.536 io_pkt_v6_hc.36871..1 slog 0 MSI-X capabilities enabled Jan 01 00:01:14.536 io_pkt_v6_hc.36871..1 slog 0 MSI-X interrupt entry 0 unmasked Jan 01 00:01:14.536 io_pkt_v6_hc.36871..1 slog 0 MSI-X interrupt entry 1 unmasked Jan 01 00:01:14.536 io_pkt_v6_hc.36871..1 slog 0 MSI-X interrupt entry 2 unmasked Jan 01 00:01:14.536 io_pkt_v6_hc.36871..1 slog 0 *** mmap_device_memory @ 81100000 to 180097000 Jan 01 00:01:14.536 io_pkt_v6_hc.36871..1 slog 0 *** 3 irqs Jan 01 00:01:14.536 io_pkt_v6_hc.36871..1 slog 0 translate 0x17a112000, len 1000 #slay io-pkt-v6-hc # io-pkt-v6-hc -de1000 -vvvvv # ifconfig wm0 up *plugged in cable here* # nicinfo wm0: INTEL PRO/1000 Gigabit (Copper) Ethernet Controller Link is DOWN Physical Node ID ........................... 00E04B 6564AD Current Physical Node ID ................... 00E04B 6564AD Current Operation Rate ..................... Unknown Active Interface Type ...................... MII Active PHY address ....................... 1 Maximum Transmittable data Unit ............ 1500 Maximum Receivable data Unit ............... 1500 Hardware Interrupt ......................... 0x100 Hardware Interrupt ......................... 0x101 Hardware Interrupt ......................... 0x102 Memory Aperture ............................ 0x81100000 - 0x8111ffff Promiscuous Mode ........................... Off Multicast Support .......................... Enabled Packets Transmitted OK ..................... 0 Bytes Transmitted OK ....................... 0 Broadcast Packets Transmitted OK ........... 0 Multicast Packets Transmitted OK ........... 0 Memory Allocation Failures on Transmit ..... 0 Packets Received OK ........................ 2 Bytes Received OK .......................... 618 Broadcast Packets Received OK .............. 2 Multicast Packets Received OK .............. 
0 Memory Allocation Failures on Receive ...... 0 Single Collisions on Transmit .............. 0 Multiple Collisions on Transmit ............ 0 Deferred Transmits ......................... 0 Late Collision on Transmit errors .......... 0 Transmits aborted (excessive collisions) ... 0 Jabber detected ............................ 0 Receive Alignment errors ................... 0 Received packets with CRC errors ........... 0 Packets Dropped on receive ................. 0 Oversized Packets received ................. 0 Short packets .............................. 0 Squelch Test errors ........................ 0 Invalid Symbol Errors ...................... 0 _______________________________________________ Networking Drivers http://community.qnx.com/sf/go/post119303 To cancel your subscription to this discussion, please e-mail drivers-networking-unsubscribe@community.qnx.com Thu, 22 Nov 2018 15:44:28 GMT http://community.qnx.com/sf/go/post119304 Hugh Brown 2018-11-22T15:44:28Z post119303: Re: Intel I210 NIC only shows link status DOWN http://community.qnx.com/sf/go/post119303 Still no luck. Slog here: Jan 01 00:01:14.513 iopkt.36871 0 -----ONLINE----- Jan 01 00:01:14.514 iopkt.36871 main_buffer* 0 tcpip starting Jan 01 00:01:14.514 iopkt.36871 main_buffer 0 Unable to open /dev/random: errno: 2 Jan 01 00:01:14.514 iopkt.36871 main_buffer 0 Falling back on internal pseudo random generator Jan 01 00:01:14.516 iopkt.36871 main_buffer 0 initializing IPsec... Jan 01 00:01:14.516 iopkt.36871 main_buffer 0 done Jan 01 00:01:14.516 iopkt.36871 main_buffer 0 IPsec: Initialized Security Association Processing. 
Jan 01 00:01:14.519 iopkt.36871 main_buffer 0 devnp-e1000.so (null) Jan 01 00:01:14.521 io_pkt_v6_hc.36871 0 -----ONLINE----- Jan 01 00:01:14.521 io_pkt_v6_hc.36871 pci_log* 0 INFO ,1,0,4 [36871:2]: SLOG module load successful for pid 36871 (io-pkt-v6-hc) Jan 01 00:01:14.521 io_pkt_v6_hc.36871 pci_log 0 INFO ,1,0,4 [36871:2]: Attempt module load of /lib/dll/pci/pci_hw-Intel_x86_APL.so Jan 01 00:01:14.521 io_pkt_v6_hc.36871 pci_log 0 INFO ,1,0,4 [36871:2]: Module is compatible with Library ver 2.0 Jan 01 00:01:14.521 io_pkt_v6_hc.36871 pci_log 0 INFO ,1,0,4 [36871:2]: Module /lib/dll/pci/pci_hw-Intel_x86_APL.so, v2.1 loaded successfully Jan 01 00:01:14.522 io_pkt_v6_hc.36871..0 0 -----ONLINE----- Jan 01 00:01:14.522 io_pkt_v6_hc.36871 pci_log 0 INFO ,1,0,4 [36871:2]: Attempt module load of /lib/dll/pci/pci_debug2.so Jan 01 00:01:14.522 io_pkt_v6_hc.36871 pci_log 0 INFO ,1,0,4 [36871:2]: Module /lib/dll/pci/pci_debug2.so, v2.1 loaded successfully Jan 01 00:01:14.522 io_pkt_v6_hc.36871..0 pci_dbg* 0 DEBUG,1,1,4 [36871:2]: find_ecam_base(): trying offset 0x60 for vid/did 8086/5af0 Jan 01 00:01:14.523 io_pkt_v6_hc.36871..0 pci_dbg 0 DEBUG,1,1,4 [36871:2]: find_ecam_base(): found ecam base 0xe0000000 Jan 01 00:01:14.534 io_pkt_v6_hc.36871..1 0 -----ONLINE----- Jan 01 00:01:14.534 iopkt.36871 main_buffer 0 wm0 Jan 01 00:01:14.534 io_pkt_v6_hc.36871 pci_log 0 INFO ,1,1,4 [36871:2]: Successful connection to PCI server on /dev/pci Jan 01 00:01:14.534 io_pkt_v6_hc.36871 pci_log 0 INFO ,1,0,4 [36871:2]: Capability modules will be searched for in directory /lib/dll/pci Jan 01 00:01:14.534 io_pkt_v6_hc.36871 pci_log 0 INFO ,1,0,4 [36871:2]: Attempt module load of /lib/dll/pci/pci_cap-0x10-8086157b.so Jan 01 00:01:14.534 io_pkt_v6_hc.36871 pci_log 0 INFO ,1,0,4 [36871:2]: Re-attempt module load of /lib/dll/pci/pci_cap-0x10.so Jan 01 00:01:14.534 io_pkt_v6_hc.36871..1 slog* 0 *** found B5:D0:F0 Jan 01 00:01:14.534 io_pkt_v6_hc.36871..1 slog 0 *** device is supported, attaching Jan 
01 00:01:14.534 pci_server.3..0 pci_dbg 0 DEBUG,1,3,4 [3:3]: msg_handler(msg type: 33, connect_entry: 806e0f8) reply_len = 8, OK [PCI_ERR_OK] Jan 01 00:01:14.535 io_pkt_v6_hc.36871 pci_log 0 INFO ,1,1,4 [36871:2]: Module is compatible with Library ver 2.0 Jan 01 00:01:14.535 io_pkt_v6_hc.36871 pci_log 0 INFO ,1,0,4 [36871:2]: Module /lib/dll/pci/pci_cap-0x10.so, v2.0 loaded successfully Jan 01 00:01:14.535 io_pkt_v6_hc.36871 pci_log 0 INFO ,1,1,4 [36871:2]: B5:D0:F0 - Check for /lib/dll/pci/pci_cap-0x10-8086157b.so ... not found Jan 01 00:01:14.535 io_pkt_v6_hc.36871 pci_log 0 INFO ,1,1,4 [36871:2]: B5:D0:F0 - Found /lib/dll/pci/pci_cap-0x10.so Jan 01 00:01:14.535 io_pkt_v6_hc.36871 pci_log 0 INFO ,1,0,4 [36871:2]: Attempt module load of /lib/dll/pci/pci_cap-0x11-8086157b.so Jan 01 00:01:14.535 io_pkt_v6_hc.36871 pci_log 0 INFO ,1,0,4 [36871:2]: Re-attempt module load of /lib/dll/pci/pci_cap-0x11.so Jan 01 00:01:14.535 io_pkt_v6_hc.36871..1 slog 0 Unable to enable PCIe capabilities, Requested Operation, Condition Or Data Already Exists [PCI_ERR_EALREADY] Jan 01 00:01:14.536 io_pkt_v6_hc.36871 pci_log 0 INFO ,1,1,4 [36871:2]: Module is compatible with Library ver 2.0 Jan 01 00:01:14.536 io_pkt_v6_hc.36871 pci_log 0 INFO ,1,0,4 [36871:2]: Module /lib/dll/pci/pci_cap-0x11.so, v2.0 loaded successfully Jan 01 00:01:14.536 io_pkt_v6_hc.36871 pci_log 0 INFO ,1,1,4 [36871:2]: B5:D0:F0 - Check for /lib/dll/pci/pci_cap-0x11-8086157b.so ... 
not found Jan 01 00:01:14.536 io_pkt_v6_hc.36871 pci_log 0 INFO ,1,1,4 [36871:2]: B5:D0:F0 - Found /lib/dll/pci/pci_cap-0x11.so Jan 01 00:01:14.536 io_pkt_v6_hc.36871..0 pci_dbg 0 DEBUG,1,3,4 [36871:2]: _cap_msix_get_nirq(820e088) returns 5 Jan 01 00:01:14.536 io_pkt_v6_hc.36871..0 pci_dbg 0 DEBUG,1,3,4 [36871:2]: _cap_msix_get_nirq(820ad78) returns 5 Jan 01 00:01:14.536 io_pkt_v6_hc.36871..0 pci_dbg 0 DEBUG,1,3,4 [36871:2]: _cap_msix_get_nirq(820ad78) returns 5 Jan 01 00:01:14.536 io_pkt_v6_hc.36871..0 pci_dbg 0 DEBUG,1,3,4 [36871:2]: _cap_msix_get_nirq(820ad78) returns 5 Jan 01 00:01:14.536 io_pkt_v6_hc.36871..0 pci_dbg 0 DEBUG,1,3,4 [36871:2]: _cap_msix_get_nirq(820ad78) returns 5 Jan 01 00:01:14.536 io_pkt_v6_hc.36871..0 pci_dbg 0 DEBUG,1,3,4 [36871:2]: _cap_msix_get_nirq(820ad78) returns 5 Jan 01 00:01:14.536 io_pkt_v6_hc.36871..0 pci_dbg 0 DEBUG,1,3,4 [36871:2]: _cap_msix_get_nirq(820ad78) returns 5 Jan 01 00:01:14.536 io_pkt_v6_hc.36871..0 pci_dbg 0 DEBUG,1,3,4 [36871:2]: _cap_msix_get_nirq(820ad78) returns 5 Jan 01 00:01:14.536 io_pkt_v6_hc.36871..1 slog 0 MSI-X interrupt entry 3 disabled Jan 01 00:01:14.536 io_pkt_v6_hc.36871..1 slog 0 MSI-X interrupt entry 4 disabled Jan 01 00:01:14.536 io_pkt_v6_hc.36871..1 slog 0 MSI-X capabilities enabled Jan 01 00:01:14.536 io_pkt_v6_hc.36871..1 slog 0 MSI-X interrupt entry 0 unmasked Jan 01 00:01:14.536 io_pkt_v6_hc.36871..1 slog 0 MSI-X interrupt entry 1 unmasked Jan 01 00:01:14.536 io_pkt_v6_hc.36871..1 slog 0 MSI-X interrupt entry 2 unmasked Jan 01 00:01:14.536 io_pkt_v6_hc.36871..1 slog 0 *** mmap_device_memory @ 81100000 to 180097000 Jan 01 00:01:14.536 io_pkt_v6_hc.36871..1 slog 0 *** 3 irqs Jan 01 00:01:14.536 io_pkt_v6_hc.36871..1 slog 0 translate 0x17a112000, len 1000 #slay io-pkt-v6-hc # io-pkt-v6-hc -de1000 -vvvvv # ifconfig wm0 up *plugged in cable here* # nicinfo wm0: INTEL PRO/1000 Gigabit (Copper) Ethernet Controller Link is DOWN Physical Node ID ........................... 
00E04B 6564AD Current Physical Node ID ................... 00E04B 6564AD Current Operation Rate ..................... Unknown Active Interface Type ...................... MII Active PHY address ....................... 1 Maximum Transmittable data Unit ............ 1500 Maximum Receivable data Unit ............... 1500 Hardware Interrupt ......................... 0x100 Hardware Interrupt ......................... 0x101 Hardware Interrupt ......................... 0x102 Memory Aperture ............................ 0x81100000 - 0x8111ffff Promiscuous Mode ........................... Off Multicast Support .......................... Enabled Packets Transmitted OK ..................... 0 Bytes Transmitted OK ....................... 0 Broadcast Packets Transmitted OK ........... 0 Multicast Packets Transmitted OK ........... 0 Memory Allocation Failures on Transmit ..... 0 Packets Received OK ........................ 2 Bytes Received OK .......................... 618 Broadcast Packets Received OK .............. 2 Multicast Packets Received OK .............. 0 Memory Allocation Failures on Receive ...... 0 Single Collisions on Transmit .............. 0 Multiple Collisions on Transmit ............ 0 Deferred Transmits ......................... 0 Late Collision on Transmit errors .......... 0 Transmits aborted (excessive collisions) ... 0 Jabber detected ............................ 0 Receive Alignment errors ................... 0 Received packets with CRC errors ........... 0 Packets Dropped on receive ................. 0 Oversized Packets received ................. 0 Short packets .............................. 0 Squelch Test errors ........................ 0 Invalid Symbol Errors ...................... 0 Thu, 22 Nov 2018 15:30:54 GMT http://community.qnx.com/sf/go/post119303 John Scarrott 2018-11-22T15:30:54Z post119302: Re: Intel I210 NIC only shows link status DOWN http://community.qnx.com/sf/go/post119302 One more test please! 
Remove the network cable, start the driver with "verbose=3" on the command line, "ifconfig wm0 up" and then plug in the network cable. Run "nicinfo wm0" to see if the link is up. If the link still doesn't come up, then I will need the hardware to debug this problem. Please also send me the slog2info output. Hugh. On 2018-11-22, 8:58 AM, "John Scarrott" <community-noreply@qnx.com> wrote: I appreciate how difficult these things can be to debug without the hardware, thanks for your help! pidin irq looks like this: pid tid name 1 1 /procnto-smp-instr 1 2 /procnto-smp-instr 0 0x2 0 -P-N- @0xffff800000077f50:0 1 3 /procnto-smp-instr 1 4 /procnto-smp-instr 1 5 /procnto-smp-instr 1 6 /procnto-smp-instr 1 7 /procnto-smp-instr 1 8 /procnto-smp-instr 1 9 /procnto-smp-instr 1 10 /procnto-smp-instr 1 11 /procnto-smp-instr 1 12 /procnto-smp-instr 1 13 /procnto-smp-instr 1 14 /procnto-smp-instr 1 15 /procnto-smp-instr 1 16 /procnto-smp-instr 2 1 bin/slogger2 2 2 bin/slogger2 3 1 sbin/pci-server 3 2 sbin/pci-server 3 3 sbin/pci-server 4 1 sbin/pipe 4 2 sbin/pipe 4 3 sbin/pipe 5 1 bin/devc-pty 6 1 usr/sbin/qconn 6 2 usr/sbin/qconn 8 1 usr/sbin/inetd 4103 1 sbin/io-pkt-v6-hc 4103 2 sbin/io-pkt-v6-hc 1 0x100 0 T---- @0x1004fa9c0:0x821dcc0 2 0x102 0 T---- @0x1004fa7a0:0x821dcc0 3 0x101 0 T---- @0x1004fa820:0x821dcc0 4103 3 sbin/io-pkt-v6-hc 16393 1 sbin/devc-ser8250 4 0x4 0 T---- @0x804b540:0x805e088 5 0x3 0 T---- @0x804b540:0x805e0d8 16395 1 bin/sh 20492 1 bin/sh 24586 1 sbin/io-usb-otg 24586 2 sbin/io-usb-otg 24586 3 sbin/io-usb-otg 24586 4 sbin/io-usb-otg 24586 5 sbin/io-usb-otg 24586 6 sbin/io-usb-otg 24586 7 sbin/io-usb-otg 24586 8 sbin/io-usb-otg 24586 9 sbin/io-usb-otg 24586 10 sbin/io-usb-otg 24586 11 sbin/io-usb-otg 6 0x103 0 TP--- =PULSE 0x40000014:24 0x1:0 24586 12 sbin/io-usb-otg 24586 13 sbin/io-usb-otg 24589 1 sbin/devb-umass 24589 2 sbin/devb-umass 24589 3 sbin/devb-umass 24589 4 sbin/devb-umass 24589 5 sbin/devb-umass 24589 6 sbin/devb-umass 24589 7 sbin/devb-umass 
24589 8 sbin/devb-umass 24589 9 sbin/devb-umass 24589 10 sbin/devb-umass 24590 1 sbin/devc-con-hid 24591 1 bin/sh 24592 1 bin/sh 24593 1 bin/slog2info 36882 1 bin/pidin Thu, 22 Nov 2018 15:12:54 GMT http://community.qnx.com/sf/go/post119302 Hugh Brown 2018-11-22T15:12:54Z post119301: Re: Intel I210 NIC only shows link status DOWN http://community.qnx.com/sf/go/post119301 I appreciate how difficult these things can be to debug without the hardware, thanks for your help! pidin irq looks like this: pid tid name 1 1 /procnto-smp-instr 1 2 /procnto-smp-instr 0 0x2 0 -P-N- @0xffff800000077f50:0 1 3 /procnto-smp-instr 1 4 /procnto-smp-instr 1 5 /procnto-smp-instr 1 6 /procnto-smp-instr 1 7 /procnto-smp-instr 1 8 /procnto-smp-instr 1 9 /procnto-smp-instr 1 10 /procnto-smp-instr 1 11 /procnto-smp-instr 1 12 /procnto-smp-instr 1 13 /procnto-smp-instr 1 14 /procnto-smp-instr 1 15 /procnto-smp-instr 1 16 /procnto-smp-instr 2 1 bin/slogger2 2 2 bin/slogger2 3 1 sbin/pci-server 3 2 sbin/pci-server 3 3 sbin/pci-server 4 1 sbin/pipe 4 2 sbin/pipe 4 3 sbin/pipe 5 1 bin/devc-pty 6 1 usr/sbin/qconn 6 2 usr/sbin/qconn 8 1 usr/sbin/inetd 4103 1 sbin/io-pkt-v6-hc 4103 2 sbin/io-pkt-v6-hc 1 0x100 0 T---- @0x1004fa9c0:0x821dcc0 2 0x102 0 T---- @0x1004fa7a0:0x821dcc0 3 0x101 0 T---- @0x1004fa820:0x821dcc0 4103 3 sbin/io-pkt-v6-hc 16393 1 sbin/devc-ser8250 4 0x4 0 T---- @0x804b540:0x805e088 5 0x3 0 T---- @0x804b540:0x805e0d8 16395 1 bin/sh 20492 1 bin/sh 24586 1 sbin/io-usb-otg 24586 2 sbin/io-usb-otg 24586 3 sbin/io-usb-otg 24586 4 sbin/io-usb-otg 24586 5 sbin/io-usb-otg 24586 6 sbin/io-usb-otg 24586 7 sbin/io-usb-otg 24586 8 sbin/io-usb-otg 24586 9 sbin/io-usb-otg 24586 10 sbin/io-usb-otg 24586 11 sbin/io-usb-otg 6 0x103 0 TP--- =PULSE 0x40000014:24 0x1:0 24586 12 
sbin/io-usb-otg 24586 13 sbin/io-usb-otg 24589 1 sbin/devb-umass 24589 2 sbin/devb-umass 24589 3 sbin/devb-umass 24589 4 sbin/devb-umass 24589 5 sbin/devb-umass 24589 6 sbin/devb-umass 24589 7 sbin/devb-umass 24589 8 sbin/devb-umass 24589 9 sbin/devb-umass 24589 10 sbin/devb-umass 24590 1 sbin/devc-con-hid 24591 1 bin/sh 24592 1 bin/sh 24593 1 bin/slog2info 36882 1 bin/pidin Thu, 22 Nov 2018 13:58:27 GMT http://community.qnx.com/sf/go/post119301 John Scarrott 2018-11-22T13:58:27Z post119300: Re: Intel I210 NIC only shows link status DOWN http://community.qnx.com/sf/go/post119300 When you run "pidin irq", what interrupt does the USB driver get? The e1000 driver definitely works, so this has something to do with interrupts, but not having the hardware here makes it difficult to debug. On 2018-11-22, 5:06 AM, "John Scarrott" <community-noreply@qnx.com> wrote: I had another look at the USB and it actually works. I was just starting the driver incorrectly. So as far as I can tell it is only the ethernet that doesn't work. Thu, 22 Nov 2018 13:41:16 GMT http://community.qnx.com/sf/go/post119300 Hugh Brown 2018-11-22T13:41:16Z post119298: Re: Intel I210 NIC only shows link status DOWN http://community.qnx.com/sf/go/post119298 I had another look at the USB and it actually works. I was just starting the driver incorrectly. So as far as I can tell it is only the ethernet that doesn't work. Thu, 22 Nov 2018 10:06:46 GMT http://community.qnx.com/sf/go/post119298 John Scarrott 2018-11-22T10:06:46Z post119297: Re: Intel I210 NIC only shows link status DOWN http://community.qnx.com/sf/go/post119297 I manually enabled USB support to check as you asked. It isn't super important for my use case, so I hadn't added it to the startup script yet. 
I won't have a chance to look at it until next week now, but thanks for all the help so far. I wonder if a BIOS setting has broken it? Thu, 15 Nov 2018 21:47:21 GMT http://community.qnx.com/sf/go/post119297 John Scarrott 2018-11-15T21:47:21Z post119296: Re: Intel I210 NIC only shows link status DOWN http://community.qnx.com/sf/go/post119296 Feedback from our PCI developer: Nothing jumps out in the log file. I have an Apollo Lake here (not the Kontron board specifically) and I can assure you MSIs work. A couple of notes on the build file: they should limit the bus scan when they start the pci-server to avoid polluting the logs (--bus-scan-limit=6 should be sufficient). Also, they said USB isn't working, but in the build file it isn't even started and there is no record in the sloginfo of USB running. Can you reconfirm? They must be doing something wrong. On 2018-11-15, 9:08 AM, "John Scarrott" <community-noreply@qnx.com> wrote: I'm also attaching my startup script in case that helps Thu, 15 Nov 2018 18:01:53 GMT http://community.qnx.com/sf/go/post119296 Hugh Brown 2018-11-15T18:01:53Z post119295: Re: Intel I210 NIC only shows link status DOWN http://community.qnx.com/sf/go/post119295 I'm also attaching my startup script in case that helps Thu, 15 Nov 2018 17:08:47 GMT http://community.qnx.com/sf/go/post119295 John Scarrott 2018-11-15T17:08:47Z post119294: Re: Intel I210 NIC only shows link status DOWN http://community.qnx.com/sf/go/post119294 I am now using startup-x86, but it has made no difference. I've attached the slog output. 
Thanks Thu, 15 Nov 2018 17:06:11 GMT http://community.qnx.com/sf/go/post119294 John Scarrott 2018-11-15T17:06:11Z post119293: Re: Intel I210 NIC only shows link status DOWN http://community.qnx.com/sf/go/post119293 Please can you try the following: They should be able to use startup-x86 with UEFI for the boot. Tell them to make sure slogger2 -s2048k is in the build file before pci-server, and then have them send the output of slog2info. Also have them make sure that where the PCI components are located (usually /lib/dll/pci/) is in their LD_LIBRARY_PATH. Maybe it can't find the modules it needs. slog2info will tell us. On 2018-11-15, 5:46 AM, "John Scarrott" <community-noreply@qnx.com> wrote: I don't know if this makes any difference but I am using startup-UEFI as the BIOS does not seem to support legacy booting. Thu, 15 Nov 2018 16:56:47 GMT http://community.qnx.com/sf/go/post119293 Hugh Brown 2018-11-15T16:56:47Z post119292: Re: Intel I210 NIC only shows link status DOWN http://community.qnx.com/sf/go/post119292 I don't know if this makes any difference but I am using startup-UEFI as the BIOS does not seem to support legacy booting. 
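[Editor's sketch] The two suggestions above (slogger2 with a larger buffer started before pci-server, and the PCI capability module directory on LD_LIBRARY_PATH) might look like this in the image's startup script. This is a minimal, hypothetical fragment: everything other than slogger2 -s2048k, pci-server and /lib/dll/pci is an assumed placeholder, and the exact buildfile syntax should be checked against the BSP's own .build file.

```shell
# Hypothetical excerpt from the startup script of a QNX image build file.
# Start slogger2 with a 2048 KB buffer *before* pci-server, so that all of
# the PCI server's log output is captured and visible via slog2info.
slogger2 -s2048k
waitfor /dev/slog2

# Make sure the PCI capability modules (pci_cap-*.so) can be located;
# /lib/dll/pci is the usual location mentioned in this thread.
LD_LIBRARY_PATH=/proc/boot:/lib:/lib/dll:/lib/dll/pci
pci-server
waitfor /dev/pci
```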
Thu, 15 Nov 2018 13:46:48 GMT http://community.qnx.com/sf/go/post119292 John Scarrott 2018-11-15T13:46:48Z post119291: Re: Intel I210 NIC only shows link status DOWN http://community.qnx.com/sf/go/post119291 pid tid name 1 1 /procnto-smp-instr 1 2 /procnto-smp-instr 0 0x2 0 -P-N- @0xffff800000077f50:0 1 3 /procnto-smp-instr 1 4 /procnto-smp-instr 1 5 /procnto-smp-instr 1 6 /procnto-smp-instr 1 7 /procnto-smp-instr 1 8 /procnto-smp-instr 1 9 /procnto-smp-instr 1 10 /procnto-smp-instr 1 11 /procnto-smp-instr 1 12 /procnto-smp-instr 1 13 /procnto-smp-instr 1 14 /procnto-smp-instr 1 15 /procnto-smp-instr 1 16 /procnto-smp-instr 2 1 bin/slogger2 2 2 bin/slogger2 3 1 sbin/pci-server 3 2 sbin/pci-server 3 3 sbin/pci-server 4 1 sbin/pipe 4 2 sbin/pipe 4 3 sbin/pipe 5 1 bin/devc-pty 6 1 usr/sbin/qconn 6 2 usr/sbin/qconn 8 1 usr/sbin/inetd 4103 1 sbin/devb-sdmmc 1 0x27 0 TP--- =PULSE 0x40000006:21 0x3:0 4103 2 sbin/devb-sdmmc 4103 3 sbin/devb-sdmmc 4103 4 sbin/devb-sdmmc 4103 5 sbin/devb-sdmmc 4103 6 sbin/devb-sdmmc 4103 7 sbin/devb-sdmmc 4103 8 sbin/devb-sdmmc 4103 9 sbin/devb-sdmmc 4103 10 sbin/devb-sdmmc 4105 1 sbin/io-pkt-v6-hc 4105 2 sbin/io-pkt-v6-hc 2 0x100 0 T---- @0x1004fa9c0:0x821dcc0 3 0x102 0 T---- @0x1004fa7a0:0x821dcc0 4 0x101 0 T---- @0x1004fa820:0x821dcc0 4105 3 sbin/io-pkt-v6-hc 32779 1 sbin/dhclient 36874 1 sbin/devc-ser8250 5 0x4 0 T---- @0x804b540:0x805e088 6 0x3 0 T---- @0x804b540:0x805e0d8 36877 1 bin/sh 40972 1 sbin/devc-con-hid 40974 1 bin/sh 40975 1 bin/sh 57360 1 bin/pidin Replugging doesn't help, and after you mentioned it USB does not work either. I've also tried multiple cables, switches, etc. PCI_MODULE_BLACKLIST=pci_cap-0x11.so io-pkt-v6-hc -de1000 verbose=3 Didn't seem to do anything much. 
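[Editor's sketch] The blacklist test reported above can be written out as a step-by-step sequence of target-side commands, assembled from the suggestions in this thread (the interface name wm0, the driver options, and the 192.168.0.1 address all come from earlier posts; this is a sketch to run on the QNX target, not a verified recipe).

```shell
# Sketch of the MSI-X isolation test from this thread (QNX target commands).

# Stop the currently running network stack.
slay io-pkt-v6-hc

# Blacklist the MSI-X capability module (PCI capability ID 0x11) so the
# e1000 driver has to fall back to a legacy PCI interrupt, then restart
# the stack with verbose logging.
PCI_MODULE_BLACKLIST=pci_cap-0x11.so io-pkt-v6-hc -de1000 verbose=3

# Assign an address, then check whether the link comes up.
ifconfig wm0 192.168.0.1
nicinfo wm0
```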
Thu, 15 Nov 2018 09:55:26 GMT http://community.qnx.com/sf/go/post119291 John Scarrott 2018-11-15T09:55:26Z post119290: Re: Intel I210 NIC only shows link status DOWN http://community.qnx.com/sf/go/post119290 We need to find out if it is a problem with MSIX interrupts. Please can you post the output from "pidin irq"? Is USB working? Also, if you unplug the link and then re-plug it, does the link come up? You can also try the following: PCI_MODULE_BLACKLIST=pci_cap-0x11.so io-pkt-v6-hc -de1000 verbose=3 Ifconfig wm0 192.168.0.1 Does the link come up after doing this? On 2018-11-14, 9:08 AM, "John Scarrott" <community-noreply@qnx.com> wrote: Yeah there is no link UP event in slog after calling ifconfig up, all I get is this: Jan 01 00:00:30.099 io_pkt_v6_hc.53257..0 pci_dbg 0 DEBUG,1,1,4 [53257:2]: find_ecam_base(): found ecam base 0xe0000000 Jan 01 00:00:30.116 io_pkt_v6_hc.53257..1 slog* 0 *** found B5:D0:F0 Jan 01 00:00:30.117 iopkt.53257 main_buffer 0 wm0 Jan 01 00:00:30.117 io_pkt_v6_hc.53257 pci_log 0 INFO ,1,1,4 [53257:2]: Successful connection to PCI server on /dev/pci Jan 01 00:00:30.117 io_pkt_v6_hc.53257 pci_log 0 INFO ,1,0,4 [53257:2]: Capability modules will be searched for in directory /lib/dll/pci Jan 01 00:00:30.117 io_pkt_v6_hc.53257 pci_log 0 INFO ,1,0,4 [53257:2]: Attempt module load of /lib/dll/pci/pci_cap-0x10-8086157b.so Jan 01 00:00:30.117 io_pkt_v6_hc.53257 pci_log 0 INFO ,1,0,4 [53257:2]: Re-attempt module load of /lib/dll/pci/pci_cap-0x10.so Jan 01 00:00:30.117 io_pkt_v6_hc.53257..1 slog 0 *** device is supported, attaching Jan 01 00:00:30.117 pci_server.3..0 pci_dbg 0 DEBUG,1,3,4 [3:2]: msg_handler(msg type: 33, connect_entry: 806c1d8) reply_len = 8, OK [PCI_ERR_OK] Jan 01 00:00:30.118 io_pkt_v6_hc.53257 pci_log 0 INFO ,1,1,4 [53257:2]: Module is compatible with Library ver 2.0 Jan 01 00:00:30.118 io_pkt_v6_hc.53257 pci_log 0 INFO ,1,0,4 [53257:2]: Module /lib/dll/pci/pci_cap-0x10.so, v2.0 loaded successfully Jan 01 00:00:30.118 
io_pkt_v6_hc.53257 pci_log 0 INFO ,1,1,4 [53257:2]: B5:D0:F0 - Check for /lib/dll/pci/pci_cap-0x10-8086157b.so ... not found Jan 01 00:00:30.118 io_pkt_v6_hc.53257 pci_log 0 INFO ,1,1,4 [53257:2]: B5:D0:F0 - Found /lib/dll/pci/pci_cap-0x10.so Jan 01 00:00:30.118 io_pkt_v6_hc.53257 pci_log 0 INFO ,1,0,4 [53257:2]: Attempt module load of /lib/dll/pci/pci_cap-0x11-8086157b.so Jan 01 00:00:30.118 io_pkt_v6_hc.53257 pci_log 0 INFO ,1,0,4 [53257:2]: Re-attempt module load of /lib/dll/pci/pci_cap-0x11.so Jan 01 00:00:30.118 io_pkt_v6_hc.53257..1 slog 0 Unable to enable PCIe capabilities, Requested Operation, Condition Or Data Already Exists [PCI_ERR_EALREADY] Jan 01 00:00:30.119 io_pkt_v6_hc.53257 pci_log 0 INFO ,1,1,4 [53257:2]: Module is compatible with Library ver 2.0 Jan 01 00:00:30.119 io_pkt_v6_hc.53257 pci_log 0 INFO ,1,0,4 [53257:2]: Module /lib/dll/pci/pci_cap-0x11.so, v2.0 loaded successfully Jan 01 00:00:30.119 io_pkt_v6_hc.53257 pci_log 0 INFO ,1,1,4 [53257:2]: B5:D0:F0 - Check for /lib/dll/pci/pci_cap-0x11-8086157b.so ... 
not found Jan 01 00:00:30.119 io_pkt_v6_hc.53257 pci_log 0 INFO ,1,1,4 [53257:2]: B5:D0:F0 - Found /lib/dll/pci/pci_cap-0x11.so Jan 01 00:00:30.119 io_pkt_v6_hc.53257..0 pci_dbg 0 DEBUG,1,3,4 [53257:2]: _cap_msix_get_nirq(820e088) returns 5 Jan 01 00:00:30.119 io_pkt_v6_hc.53257..0 pci_dbg 0 DEBUG,1,3,4 [53257:2]: _cap_msix_get_nirq(820ad78) returns 5 Jan 01 00:00:30.119 io_pkt_v6_hc.53257..0 pci_dbg 0 DEBUG,1,3,4 [53257:2]: _cap_msix_get_nirq(820ad78) returns 5 Jan 01 00:00:30.119 io_pkt_v6_hc.53257..0 pci_dbg 0 DEBUG,1,3,4 [53257:2]: _cap_msix_get_nirq(820ad78) returns 5 Jan 01 00:00:30.119 io_pkt_v6_hc.53257..0 pci_dbg 0 DEBUG,1,3,4 [53257:2]: _cap_msix_get_nirq(820ad78) returns 5 Jan 01 00:00:30.119 io_pkt_v6_hc.53257..1 slog 0 MSI-X interrupt entry 3 disabled Jan 01 00:00:30.119 io_pkt_v6_hc.53257..1 slog 0 MSI-X interrupt entry 4 disabled Jan 01 00:00:30.119 pci_server.3 pci_log 0 INFO ,1,1,4 [3:2]: B5:D0:F0 - Check for /lib/dll/pci/pci_cap-0x11-8086157b.so ... not found Jan 01 00:00:30.119 pci_server.3 pci_log 0 INFO ,1,1,4 [3:2]: B5:D0:F0 - Found /lib/dll/pci/pci_cap-0x11 pci_dbg 0 DEBUG,1,3,4 [3:2]: rsrcdb_irq_resv(0, 0x0, 3, 80692c8->[0]=174,...) 
returned OK [PCI_ERR_OK] Jan 01 00:00:30.119 pci_server.3 pci_log 0 INFO ,1,2,4 [3:2]: hw_alloc_irq(B5:D0:F0, 0, 0x0, 3, 806d268) OK [PCI_ERR_OK] Jan 01 00:00:30.119 pci_server.3..0 pci_dbg 0 DEBUG,1,3,4 [3:2]: _cap_msix_get_nirq(806b490) returns 5 Jan 01 00:00:30.119 pci_server.3..0 pci_dbg 0 DEBUG,1,3,4 [3:2]: MSI-X vector table (5 vectors, sz=0x1000) at 81120000 mapped to 180095000 for server and pid 53257 Jan 01 00:00:30.120 io_pkt_v6_hc.53257..0 pci_dbg 0 DEBUG,1,3,4 [53257:2]: _cap_msix_get_nirq(820ad78) returns 5 Jan 01 00:00:30.120 io_pkt_v6_hc.53257..0 pci_dbg 0 DEBUG,1,3,4 [53257:2]: _cap_msix_get_nirq(820ad78) returns 5 Jan 01 00:00:30.120 io_pkt_v6_hc.53257..0 pci_dbg 0 DEBUG,1,3,4 [53257:2]: _cap_msix_get_nirq(820ad78) returns 5 Jan 01 00:00:30.120 io_pkt_v6_hc.53257..1 slog 0 MSI-X capabilities enabled Jan 01 00:00:30.120 io_pkt_v6_hc.53257..1 slog 0 MSI-X interrupt entry 0 unmasked Jan 01 00:00:30.120 io_pkt_v6_hc.53257..1 slog 0 MSI-X interrupt entry 1 unmasked Jan 01 00:00:30.120 io_pkt_v6_hc.53257..1 slog 0 MSI-X interrupt entry 2 unmasked Jan 01 00:00:30.120 io_pkt_v6_hc.53257..1 slog 0 *** mmap_device_memory @ 81100000 to 180097000 Jan 01 00:00:30.120 io_pkt_v6_hc.53257..1 slog 0 *** 3 irqs Jan 01 00:00:30.120 io_pkt_v6_hc.53257..1 slog 0 translate 0x17a4a2000, len 1000 Jan 01 00:00:30.120 io_pkt_v6_hc.53257..1 slog 0 bmtrans is 0x0 Jan 01 00:00:30.120 io_pkt_v6_hc.53257..1 slog 0 i82544_pci_attach: rar entries 16 Jan 01 00:00:30.120 pci_server.3..0 pci_dbg 0 DEBUG,1,3,4 [3:2]: _cap_msix_get_nirq(806b490) returns 5 Jan 01 00:00:30.120 pci_server.3..0 pci_dbg 0 DEBUG,1,3,4 [3:2]: MSI-X PBA (5 vectors, sz=0x1000) at 81122000 mapped to 180096000 for server and pid 53257 Jan 01 00:00:30.120 pci_server.3..0 pci_dbg 0 DEBUG,1,3,4 [3:2]: msg_handler(msg type: 25, connect_entry: 806c1d8) reply_len = 100, OK [PCI_ERR_OK] Jan 01 00:00:30.120 pci_server.3..0 pci_dbg 0 DEBUG,1,3,4 [3:2]: msg_handler(msg type: 24, connect_entry: 806c1d8) reply_len = 224, OK 
[PCI_ERR_OK] Jan 01 00:00:30.120 pci_server.3..0 pci_dbg 0 DEBUG,1,3,4 [3:2]: msg_handler(msg type: 23, connect_entry: 806c1d8) reply_len = 20, OK [PCI_ERR_OK] Jan 01 00:00:30.120 pci_server.3..0 pci_dbg 0 DEBUG,1,3,4 [3:2]: msg_handler(msg type: 22, connect_entry: 806c1d8) reply_len = 0, OK [PCI_ERR_OK] Jan 01 00:00:30.437 io_pkt_v6_hc.53257..1 slog 0 INTEL PRO/1000 Gigabit (Copper) Jan 01 00:00:30.437 io_pkt_v6_hc.53257..1 slog 0 Vendor .............. 0x8086 Jan 01 00:00:30.437 io_pkt_v6_hc.53257..1 slog 0 Device .............. 0x157b Jan 01 00:00:30.437 io_pkt_v6_hc.53257..1 slog 0 Revision ............ 0x0 Jan 01 00:00:30.437 io_pkt_v6_hc.53257..1 slog 0 Memory base ......... 0x81100000 Thanks for the quick response! It really means a lot. I'm not sure how I would go about debugging the interrupts, any advice? Wed, 14 Nov 2018 17:50:30 GMT http://community.qnx.com/sf/go/post119290 Hugh Brown 2018-11-14T17:50:30Z post119289: Re: Intel I210 NIC only shows link status DOWN http://community.qnx.com/sf/go/post119289 I'll ask our PCI developer to take a look at this. 
On 2018-11-14, 9:08 AM, "John Scarrott" <community-noreply@qnx.com> wrote: Yeah there is no link UP event in slog after calling ifconfig up, all I get is this: Jan 01 00:00:30.099 io_pkt_v6_hc.53257..0 pci_dbg 0 DEBUG,1,1,4 [53257:2]: find_ecam_base(): found ecam base 0xe0000000 Jan 01 00:00:30.116 io_pkt_v6_hc.53257..1 slog* 0 *** found B5:D0:F0 Jan 01 00:00:30.117 iopkt.53257 main_buffer 0 wm0 Jan 01 00:00:30.117 io_pkt_v6_hc.53257 pci_log 0 INFO ,1,1,4 [53257:2]: Successful connection to PCI server on /dev/pci Jan 01 00:00:30.117 io_pkt_v6_hc.53257 pci_log 0 INFO ,1,0,4 [53257:2]: Capability modules will be searched for in directory /lib/dll/pci Jan 01 00:00:30.117 io_pkt_v6_hc.53257 pci_log 0 INFO ,1,0,4 [53257:2]: Attempt module load of /lib/dll/pci/pci_cap-0x10-8086157b.so Jan 01 00:00:30.117 io_pkt_v6_hc.53257 pci_log 0 INFO ,1,0,4 [53257:2]: Re-attempt module load of /lib/dll/pci/pci_cap-0x10.so Jan 01 00:00:30.117 io_pkt_v6_hc.53257..1 slog 0 *** device is supported, attaching Jan 01 00:00:30.117 pci_server.3..0 pci_dbg 0 DEBUG,1,3,4 [3:2]: msg_handler(msg type: 33, connect_entry: 806c1d8) reply_len = 8, OK [PCI_ERR_OK] Jan 01 00:00:30.118 io_pkt_v6_hc.53257 pci_log 0 INFO ,1,1,4 [53257:2]: Module is compatible with Library ver 2.0 Jan 01 00:00:30.118 io_pkt_v6_hc.53257 pci_log 0 INFO ,1,0,4 [53257:2]: Module /lib/dll/pci/pci_cap-0x10.so, v2.0 loaded successfully Jan 01 00:00:30.118 io_pkt_v6_hc.53257 pci_log 0 INFO ,1,1,4 [53257:2]: B5:D0:F0 - Check for /lib/dll/pci/pci_cap-0x10-8086157b.so ... 
not found Jan 01 00:00:30.118 io_pkt_v6_hc.53257 pci_log 0 INFO ,1,1,4 [53257:2]: B5:D0:F0 - Found /lib/dll/pci/pci_cap-0x10.so Jan 01 00:00:30.118 io_pkt_v6_hc.53257 pci_log 0 INFO ,1,0,4 [53257:2]: Attempt module load of /lib/dll/pci/pci_cap-0x11-8086157b.so Jan 01 00:00:30.118 io_pkt_v6_hc.53257 pci_log 0 INFO ,1,0,4 [53257:2]: Re-attempt module load of /lib/dll/pci/pci_cap-0x11.so Jan 01 00:00:30.118 io_pkt_v6_hc.53257..1 slog 0 Unable to enable PCIe capabilities, Requested Operation, Condition Or Data Already Exists [PCI_ERR_EALREADY] Jan 01 00:00:30.119 io_pkt_v6_hc.53257 pci_log 0 INFO ,1,1,4 [53257:2]: Module is compatible with Library ver 2.0 Jan 01 00:00:30.119 io_pkt_v6_hc.53257 pci_log 0 INFO ,1,0,4 [53257:2]: Module /lib/dll/pci/pci_cap-0x11.so, v2.0 loaded successfully Jan 01 00:00:30.119 io_pkt_v6_hc.53257 pci_log 0 INFO ,1,1,4 [53257:2]: B5:D0:F0 - Check for /lib/dll/pci/pci_cap-0x11-8086157b.so ... not found Jan 01 00:00:30.119 io_pkt_v6_hc.53257 pci_log 0 INFO ,1,1,4 [53257:2]: B5:D0:F0 - Found /lib/dll/pci/pci_cap-0x11.so Jan 01 00:00:30.119 io_pkt_v6_hc.53257..0 pci_dbg 0 DEBUG,1,3,4 [53257:2]: _cap_msix_get_nirq(820e088) returns 5 Jan 01 00:00:30.119 io_pkt_v6_hc.53257..0 pci_dbg 0 DEBUG,1,3,4 [53257:2]: _cap_msix_get_nirq(820ad78) returns 5 Jan 01 00:00:30.119 io_pkt_v6_hc.53257..0 pci_dbg 0 DEBUG,1,3,4 [53257:2]: _cap_msix_get_nirq(820ad78) returns 5 Jan 01 00:00:30.119 io_pkt_v6_hc.53257..0 pci_dbg 0 DEBUG,1,3,4 [53257:2]: _cap_msix_get_nirq(820ad78) returns 5 Jan 01 00:00:30.119 io_pkt_v6_hc.53257..0 pci_dbg 0 DEBUG,1,3,4 [53257:2]: _cap_msix_get_nirq(820ad78) returns 5 Jan 01 00:00:30.119 io_pkt_v6_hc.53257..1 slog 0 MSI-X interrupt entry 3 disabled Jan 01 00:00:30.119 io_pkt_v6_hc.53257..1 slog 0 MSI-X interrupt entry 4 disabled Jan 01 00:00:30.119 pci_server.3 pci_log 0 INFO ,1,1,4 [3:2]: B5:D0:F0 - Check for /lib/dll/pci/pci_cap-0x11-8086157b.so ... 
not found Jan 01 00:00:30.119 pci_server.3 pci_log 0 INFO ,1,1,4 [3:2]: B5:D0:F0 - Found /lib/dll/pci/pci_cap-0x11 pci_dbg 0 DEBUG,1,3,4 [3:2]: rsrcdb_irq_resv(0, 0x0, 3, 80692c8->[0]=174,...) returned OK [PCI_ERR_OK] Jan 01 00:00:30.119 pci_server.3 pci_log 0 INFO ,1,2,4 [3:2]: hw_alloc_irq(B5:D0:F0, 0, 0x0, 3, 806d268) OK [PCI_ERR_OK] Jan 01 00:00:30.119 pci_server.3..0 pci_dbg 0 DEBUG,1,3,4 [3:2]: _cap_msix_get_nirq(806b490) returns 5 Jan 01 00:00:30.119 pci_server.3..0 pci_dbg 0 DEBUG,1,3,4 [3:2]: MSI-X vector table (5 vectors, sz=0x1000) at 81120000 mapped to 180095000 for server and pid 53257 Jan 01 00:00:30.120 io_pkt_v6_hc.53257..0 pci_dbg 0 DEBUG,1,3,4 [53257:2]: _cap_msix_get_nirq(820ad78) returns 5 Jan 01 00:00:30.120 io_pkt_v6_hc.53257..0 pci_dbg 0 DEBUG,1,3,4 [53257:2]: _cap_msix_get_nirq(820ad78) returns 5 Jan 01 00:00:30.120 io_pkt_v6_hc.53257..0 pci_dbg 0 DEBUG,1,3,4 [53257:2]: _cap_msix_get_nirq(820ad78) returns 5 Jan 01 00:00:30.120 io_pkt_v6_hc.53257..1 slog 0 MSI-X capabilities enabled Jan 01 00:00:30.120 io_pkt_v6_hc.53257..1 slog 0 MSI-X interrupt entry 0 unmasked Jan 01 00:00:30.120 io_pkt_v6_hc.53257..1 slog 0 MSI-X interrupt entry 1 unmasked Jan 01 00:00:30.120 io_pkt_v6_hc.53257..1 slog 0 MSI-X interrupt entry 2 unmasked Jan 01 00:00:30.120 io_pkt_v6_hc.53257..1 slog 0 *** mmap_device_memory @ 81100000 to 180097000 Jan 01 00:00:30.120 io_pkt_v6_hc.53257..1 slog 0 *** 3 irqs Jan 01 00:00:30.120 io_pkt_v6_hc.53257..1 slog 0 translate 0x17a4a2000, len 1000 Jan 01 00:00:30.120 io_pkt_v6_hc.53257..1 slog 0 bmtrans is 0x0 Jan 01 00:00:30.120 io_pkt_v6_hc.53257..1 slog 0 i82544_pci_attach: rar entries 16 Jan 01 00:00:30.120 pci_server.3..0 pci_dbg 0 DEBUG,1,3,4 [3:2]: _cap_msix_get_nirq(806b490) returns 5 Jan 01 00:00:30.120 pci_server.3..0 pci_dbg 0 DEBUG,1,3,4 [3:2]: MSI-X PBA (5 vectors, sz=0x1000) at 81122000 mapped to 180096000 for server and pid 53257 Jan 01 00:00:30.120 pci_server.3..0 pci_dbg 0 DEBUG,1,3,4 [3:2]: msg_handler(msg type: 25, 
connect_entry: 806c1d8) reply_len = 100, OK [PCI_ERR_OK] Jan 01 00:00:30.120 pci_server.3..0 pci_dbg 0 DEBUG,1,3,4 [3:2]: msg_handler(msg type: 24, connect_entry: 806c1d8) reply_len = 224, OK [PCI_ERR_OK] Jan 01 00:00:30.120 pci_server.3..0 pci_dbg 0 DEBUG,1,3,4 [3:2]: msg_handler(msg type: 23, connect_entry: 806c1d8) reply_len = 20, OK [PCI_ERR_OK] Jan 01 00:00:30.120 pci_server.3..0 pci_dbg 0 DEBUG,1,3,4 [3:2]: msg_handler(msg type: 22, connect_entry: 806c1d8) reply_len = 0, OK [PCI_ERR_OK] Jan 01 00:00:30.437 io_pkt_v6_hc.53257..1 slog 0 INTEL PRO/1000 Gigabit (Copper) Jan 01 00:00:30.437 io_pkt_v6_hc.53257..1 slog 0 Vendor .............. 0x8086 Jan 01 00:00:30.437 io_pkt_v6_hc.53257..1 slog 0 Device .............. 0x157b Jan 01 00:00:30.437 io_pkt_v6_hc.53257..1 slog 0 Revision ............ 0x0 Jan 01 00:00:30.437 io_pkt_v6_hc.53257..1 slog 0 Memory base ......... 0x81100000 Thanks for the quick response! It really means a lot. I'm not sure how I would go about debugging the interrupts, any advice? 
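On the question of where to start with interrupt debugging: one low-tech first step (a sketch using standard text tools; on the target you would capture live `slog2info -c` output rather than the canned sample used here) is to pull the interrupt- and link-related lines out of the driver log, and check whether a "Link up" line ever appears:

```shell
# Save a few of the slog lines quoted above. On a live target you would
# capture the real thing instead, e.g.  slog2info -c > /tmp/slog.txt
cat <<'EOF' > /tmp/slog.txt
Jan 01 00:00:30.120 io_pkt_v6_hc.53257..1 slog 0 MSI-X capabilities enabled
Jan 01 00:00:30.120 io_pkt_v6_hc.53257..1 slog 0 MSI-X interrupt entry 0 unmasked
Jan 01 00:00:30.120 io_pkt_v6_hc.53257..1 slog 0 *** 3 irqs
Jan 01 00:00:30.437 io_pkt_v6_hc.53257..1 slog 0 Memory base ......... 0x81100000
EOF

# Keep only the interrupt- and link-related lines. A healthy bring-up with
# the driver started verbose should also show a "Link up" line here, which
# is exactly what is missing from the log above.
grep -E 'MSI-X|irq|[Ll]ink' /tmp/slog.txt
```

If the MSI-X setup lines appear but no link event ever does, that points at interrupt delivery rather than driver attach, which is consistent with the advice elsewhere in this thread.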
_______________________________________________ Networking Drivers http://community.qnx.com/sf/go/post119288 To cancel your subscription to this discussion, please e-mail drivers-networking-unsubscribe@community.qnx.com Wed, 14 Nov 2018 17:14:44 GMT http://community.qnx.com/sf/go/post119289 Hugh Brown 2018-11-14T17:14:44Z post119288: Re: Intel I210 NIC only shows link status DOWN http://community.qnx.com/sf/go/post119288 Yeah there is no link UP event in slog after calling ifconfig up, all I get is this: Jan 01 00:00:30.099 io_pkt_v6_hc.53257..0 pci_dbg 0 DEBUG,1,1,4 [53257:2]: find_ecam_base(): found ecam base 0xe0000000 Jan 01 00:00:30.116 io_pkt_v6_hc.53257..1 slog* 0 *** found B5:D0:F0 Jan 01 00:00:30.117 iopkt.53257 main_buffer 0 wm0 Jan 01 00:00:30.117 io_pkt_v6_hc.53257 pci_log 0 INFO ,1,1,4 [53257:2]: Successful connection to PCI server on /dev/pci Jan 01 00:00:30.117 io_pkt_v6_hc.53257 pci_log 0 INFO ,1,0,4 [53257:2]: Capability modules will be searched for in directory /lib/dll/pci Jan 01 00:00:30.117 io_pkt_v6_hc.53257 pci_log 0 INFO ,1,0,4 [53257:2]: Attempt module load of /lib/dll/pci/pci_cap-0x10-8086157b.so Jan 01 00:00:30.117 io_pkt_v6_hc.53257 pci_log 0 INFO ,1,0,4 [53257:2]: Re-attempt module load of /lib/dll/pci/pci_cap-0x10.so Jan 01 00:00:30.117 io_pkt_v6_hc.53257..1 slog 0 *** device is supported, attaching Jan 01 00:00:30.117 pci_server.3..0 pci_dbg 0 DEBUG,1,3,4 [3:2]: msg_handler(msg type: 33, connect_entry: 806c1d8) reply_len = 8, OK [PCI_ERR_OK] Jan 01 00:00:30.118 io_pkt_v6_hc.53257 pci_log 0 INFO ,1,1,4 [53257:2]: Module is compatible with Library ver 2.0 Jan 01 00:00:30.118 io_pkt_v6_hc.53257 pci_log 0 INFO ,1,0,4 [53257:2]: Module /lib/dll/pci/pci_cap-0x10.so, v2.0 loaded successfully Jan 01 00:00:30.118 io_pkt_v6_hc.53257 pci_log 0 INFO ,1,1,4 [53257:2]: B5:D0:F0 - Check for /lib/dll/pci/pci_cap-0x10-8086157b.so ... 
not found Jan 01 00:00:30.118 io_pkt_v6_hc.53257 pci_log 0 INFO ,1,1,4 [53257:2]: B5:D0:F0 - Found /lib/dll/pci/pci_cap-0x10.so Jan 01 00:00:30.118 io_pkt_v6_hc.53257 pci_log 0 INFO ,1,0,4 [53257:2]: Attempt module load of /lib/dll/pci/pci_cap-0x11-8086157b.so Jan 01 00:00:30.118 io_pkt_v6_hc.53257 pci_log 0 INFO ,1,0,4 [53257:2]: Re-attempt module load of /lib/dll/pci/pci_cap-0x11.so Jan 01 00:00:30.118 io_pkt_v6_hc.53257..1 slog 0 Unable to enable PCIe capabilities, Requested Operation, Condition Or Data Already Exists [PCI_ERR_EALREADY] Jan 01 00:00:30.119 io_pkt_v6_hc.53257 pci_log 0 INFO ,1,1,4 [53257:2]: Module is compatible with Library ver 2.0 Jan 01 00:00:30.119 io_pkt_v6_hc.53257 pci_log 0 INFO ,1,0,4 [53257:2]: Module /lib/dll/pci/pci_cap-0x11.so, v2.0 loaded successfully Jan 01 00:00:30.119 io_pkt_v6_hc.53257 pci_log 0 INFO ,1,1,4 [53257:2]: B5:D0:F0 - Check for /lib/dll/pci/pci_cap-0x11-8086157b.so ... not found Jan 01 00:00:30.119 io_pkt_v6_hc.53257 pci_log 0 INFO ,1,1,4 [53257:2]: B5:D0:F0 - Found /lib/dll/pci/pci_cap-0x11.so Jan 01 00:00:30.119 io_pkt_v6_hc.53257..0 pci_dbg 0 DEBUG,1,3,4 [53257:2]: _cap_msix_get_nirq(820e088) returns 5 Jan 01 00:00:30.119 io_pkt_v6_hc.53257..0 pci_dbg 0 DEBUG,1,3,4 [53257:2]: _cap_msix_get_nirq(820ad78) returns 5 Jan 01 00:00:30.119 io_pkt_v6_hc.53257..0 pci_dbg 0 DEBUG,1,3,4 [53257:2]: _cap_msix_get_nirq(820ad78) returns 5 Jan 01 00:00:30.119 io_pkt_v6_hc.53257..0 pci_dbg 0 DEBUG,1,3,4 [53257:2]: _cap_msix_get_nirq(820ad78) returns 5 Jan 01 00:00:30.119 io_pkt_v6_hc.53257..0 pci_dbg 0 DEBUG,1,3,4 [53257:2]: _cap_msix_get_nirq(820ad78) returns 5 Jan 01 00:00:30.119 io_pkt_v6_hc.53257..1 slog 0 MSI-X interrupt entry 3 disabled Jan 01 00:00:30.119 io_pkt_v6_hc.53257..1 slog 0 MSI-X interrupt entry 4 disabled Jan 01 00:00:30.119 pci_server.3 pci_log 0 INFO ,1,1,4 [3:2]: B5:D0:F0 - Check for /lib/dll/pci/pci_cap-0x11-8086157b.so ... 
not found Jan 01 00:00:30.119 pci_server.3 pci_log 0 INFO ,1,1,4 [3:2]: B5:D0:F0 - Found /lib/dll/pci/pci_cap-0x11 pci_dbg 0 DEBUG,1,3,4 [3:2]: rsrcdb_irq_resv(0, 0x0, 3, 80692c8->[0]=174,...) returned OK [PCI_ERR_OK] Jan 01 00:00:30.119 pci_server.3 pci_log 0 INFO ,1,2,4 [3:2]: hw_alloc_irq(B5:D0:F0, 0, 0x0, 3, 806d268) OK [PCI_ERR_OK] Jan 01 00:00:30.119 pci_server.3..0 pci_dbg 0 DEBUG,1,3,4 [3:2]: _cap_msix_get_nirq(806b490) returns 5 Jan 01 00:00:30.119 pci_server.3..0 pci_dbg 0 DEBUG,1,3,4 [3:2]: MSI-X vector table (5 vectors, sz=0x1000) at 81120000 mapped to 180095000 for server and pid 53257 Jan 01 00:00:30.120 io_pkt_v6_hc.53257..0 pci_dbg 0 DEBUG,1,3,4 [53257:2]: _cap_msix_get_nirq(820ad78) returns 5 Jan 01 00:00:30.120 io_pkt_v6_hc.53257..0 pci_dbg 0 DEBUG,1,3,4 [53257:2]: _cap_msix_get_nirq(820ad78) returns 5 Jan 01 00:00:30.120 io_pkt_v6_hc.53257..0 pci_dbg 0 DEBUG,1,3,4 [53257:2]: _cap_msix_get_nirq(820ad78) returns 5 Jan 01 00:00:30.120 io_pkt_v6_hc.53257..1 slog 0 MSI-X capabilities enabled Jan 01 00:00:30.120 io_pkt_v6_hc.53257..1 slog 0 MSI-X interrupt entry 0 unmasked Jan 01 00:00:30.120 io_pkt_v6_hc.53257..1 slog 0 MSI-X interrupt entry 1 unmasked Jan 01 00:00:30.120 io_pkt_v6_hc.53257..1 slog 0 MSI-X interrupt entry 2 unmasked Jan 01 00:00:30.120 io_pkt_v6_hc.53257..1 slog 0 *** mmap_device_memory @ 81100000 to 180097000 Jan 01 00:00:30.120 io_pkt_v6_hc.53257..1 slog 0 *** 3 irqs Jan 01 00:00:30.120 io_pkt_v6_hc.53257..1 slog 0 translate 0x17a4a2000, len 1000 Jan 01 00:00:30.120 io_pkt_v6_hc.53257..1 slog 0 bmtrans is 0x0 Jan 01 00:00:30.120 io_pkt_v6_hc.53257..1 slog 0 i82544_pci_attach: rar entries 16 Jan 01 00:00:30.120 pci_server.3..0 pci_dbg 0 DEBUG,1,3,4 [3:2]: _cap_msix_get_nirq(806b490) returns 5 Jan 01 00:00:30.120 pci_server.3..0 pci_dbg 0 DEBUG,1,3,4 [3:2]: MSI-X PBA (5 vectors, sz=0x1000) at 81122000 mapped to 180096000 for server and pid 53257 Jan 01 00:00:30.120 pci_server.3..0 pci_dbg 0 DEBUG,1,3,4 [3:2]: msg_handler(msg type: 25, 
connect_entry: 806c1d8) reply_len = 100, OK [PCI_ERR_OK] Jan 01 00:00:30.120 pci_server.3..0 pci_dbg 0 DEBUG,1,3,4 [3:2]: msg_handler(msg type: 24, connect_entry: 806c1d8) reply_len = 224, OK [PCI_ERR_OK] Jan 01 00:00:30.120 pci_server.3..0 pci_dbg 0 DEBUG,1,3,4 [3:2]: msg_handler(msg type: 23, connect_entry: 806c1d8) reply_len = 20, OK [PCI_ERR_OK] Jan 01 00:00:30.120 pci_server.3..0 pci_dbg 0 DEBUG,1,3,4 [3:2]: msg_handler(msg type: 22, connect_entry: 806c1d8) reply_len = 0, OK [PCI_ERR_OK] Jan 01 00:00:30.437 io_pkt_v6_hc.53257..1 slog 0 INTEL PRO/1000 Gigabit (Copper) Jan 01 00:00:30.437 io_pkt_v6_hc.53257..1 slog 0 Vendor .............. 0x8086 Jan 01 00:00:30.437 io_pkt_v6_hc.53257..1 slog 0 Device .............. 0x157b Jan 01 00:00:30.437 io_pkt_v6_hc.53257..1 slog 0 Revision ............ 0x0 Jan 01 00:00:30.437 io_pkt_v6_hc.53257..1 slog 0 Memory base ......... 0x81100000 Thanks for the quick response! It really means a lot. I'm not sure how I would go about debugging the interrupts, any advice? Wed, 14 Nov 2018 17:08:39 GMT http://community.qnx.com/sf/go/post119288 John Scarrott 2018-11-14T17:08:39Z post119287: Re: Intel I210 NIC only shows link status DOWN http://community.qnx.com/sf/go/post119287 Yes, it appears that interrupts are not working. You can start the driver as follows - "io-pkt-v6-hc -de1000 verbose=3", which will output some debug information to slog2info. If you don't see a "Link up" message in slog2info after running ifconfig wm0 169.254.0.1, then it definitely looks like an interrupt problem on your board. On 2018-11-14, 8:02 AM, "John Scarrott" <community-noreply@qnx.com> wrote: I'm trying to get QNX 7 working on a Kontron mAL10 E3930. I've got the board booting fine but cannot get the network up and running. I'm running:
io-pkt-v6-hc -d e1000 -p tcpip -vvvv &
if_up -r 10 -p wm0
ifconfig wm0 169.254.0.1 up
which works fine with no errors reported and nothing obviously wrong in the slog. 
But ifconfig shows this: wm0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500 capabilities rx=1f<IP4CSUM,TCP4CSUM,UDP4CSUM,TCP6CSUM,UDP6CSUM> capabilities tx=7f<IP4CSUM,TCP4CSUM,UDP4CSUM,TCP6CSUM,UDP6CSUM,TSO4,TSO6> enabled=0 address: 00:e0:4b:65:64:ad media: Ethernet autoselect (none) status: no carrier inet 169.254.0.1 netmask 0xffff0000 broadcast 169.254.255.255 inet6 fe80::2e0:4bff:fe65:64ad%wm0 prefixlen 64 scopeid 0x11 The link light works and I can disable it fine using the ifconfig wm0 down command. nicinfo reports: wm0: INTEL PRO/1000 Gigabit (Copper) Ethernet Controller Link is DOWN Physical Node ID ........................... 00E04B 6564AD Current Physical Node ID ................... 00E04B 6564AD Current Operation Rate ..................... Unknown Active Interface Type ...................... MII Active PHY address ....................... 1 Maximum Transmittable data Unit ............ 1500 Maximum Receivable data Unit ............... 1500 Hardware Interrupt ......................... 0x100 Hardware Interrupt ......................... 0x101 Hardware Interrupt ......................... 0x102 Memory Aperture ............................ 0x81100000 - 0x8111ffff Promiscuous Mode ........................... Off Multicast Support .......................... Enabled Packets Transmitted OK ..................... 0 Bytes Transmitted OK ....................... 0 Broadcast Packets Transmitted OK ........... 0 Multicast Packets Transmitted OK ........... 0 Memory Allocation Failures on Transmit ..... 0 Packets Received OK ........................ 142 Bytes Received OK .......................... 31449 Broadcast Packets Received OK .............. 142 Multicast Packets Received OK .............. 0 Memory Allocation Failures on Receive ...... 0 Single Collisions on Transmit .............. 0 Multiple Collisions on Transmit ............ 0 Deferred Transmits ......................... 0 Late Collision on Transmit errors .......... 
0 Transmits aborted (excessive collisions) ... 0 Jabber detected ............................ 0 Receive Alignment errors ................... 0 Received packets with CRC errors ........... 0 Packets Dropped on receive ................. 0 Oversized Packets received ................. 0 Short packets .............................. 0 Squelch Test errors ........................ 514 Invalid Symbol Errors ...................... 0 My only thoughts are perhaps the interrupts aren't setup correctly but I am unsure how to configure them. _______________________________________________ Networking Drivers http://community.qnx.com/sf/go/post119286 To cancel your subscription to this discussion, please e-mail drivers-networking-unsubscribe@community.qnx.com Wed, 14 Nov 2018 16:55:39 GMT http://community.qnx.com/sf/go/post119287 Hugh Brown 2018-11-14T16:55:39Z post119286: Intel I210 NIC only shows link status DOWN http://community.qnx.com/sf/go/post119286 I'm trying to get QNX 7 working on a kontron mAL10 E3930. I've got the board booting fine but cannot get the network up and running. I'm running: io-pkt-v6-hc -d e1000 -p tcpip -vvvv & if_up -r 10 -p wm0 ifconfig wm0 169.254.0.1 up which works fine with no errors reported and nothing obviously wrong in the slog. But ifconfig shows this: wm0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500 capabilities rx=1f<IP4CSUM,TCP4CSUM,UDP4CSUM,TCP6CSUM,UDP6CSUM> capabilities tx=7f<IP4CSUM,TCP4CSUM,UDP4CSUM,TCP6CSUM,UDP6CSUM,TSO4,TSO6> enabled=0 address: 00:e0:4b:65:64:ad media: Ethernet autoselect (none) status: no carrier inet 169.254.0.1 netmask 0xffff0000 broadcast 169.254.255.255 inet6 fe80::2e0:4bff:fe65:64ad%wm0 prefixlen 64 scopeid 0x11 The link light works and I can disable it fine using the ifconfig wm0 down command. nicinfo reports: wm0: INTEL PRO/1000 Gigabit (Copper) Ethernet Controller Link is DOWN Physical Node ID ........................... 00E04B 6564AD Current Physical Node ID ................... 
00E04B 6564AD Current Operation Rate ..................... Unknown Active Interface Type ...................... MII Active PHY address ....................... 1 Maximum Transmittable data Unit ............ 1500 Maximum Receivable data Unit ............... 1500 Hardware Interrupt ......................... 0x100 Hardware Interrupt ......................... 0x101 Hardware Interrupt ......................... 0x102 Memory Aperture ............................ 0x81100000 - 0x8111ffff Promiscuous Mode ........................... Off Multicast Support .......................... Enabled Packets Transmitted OK ..................... 0 Bytes Transmitted OK ....................... 0 Broadcast Packets Transmitted OK ........... 0 Multicast Packets Transmitted OK ........... 0 Memory Allocation Failures on Transmit ..... 0 Packets Received OK ........................ 142 Bytes Received OK .......................... 31449 Broadcast Packets Received OK .............. 142 Multicast Packets Received OK .............. 0 Memory Allocation Failures on Receive ...... 0 Single Collisions on Transmit .............. 0 Multiple Collisions on Transmit ............ 0 Deferred Transmits ......................... 0 Late Collision on Transmit errors .......... 0 Transmits aborted (excessive collisions) ... 0 Jabber detected ............................ 0 Receive Alignment errors ................... 0 Received packets with CRC errors ........... 0 Packets Dropped on receive ................. 0 Oversized Packets received ................. 0 Short packets .............................. 0 Squelch Test errors ........................ 514 Invalid Symbol Errors ...................... 0 My only thoughts are perhaps the interrupts aren't setup correctly but I am unsure how to configure them. Wed, 14 Nov 2018 16:02:08 GMT http://community.qnx.com/sf/go/post119286 John Scarrott 2018-11-14T16:02:08Z post119275: Re: how to use devnp-rum.so ? 
http://community.qnx.com/sf/go/post119275 Update: I disabled security on my router, then configured /etc/wpa_supplicant.conf: network={ ssid="MYWIFI" key_mgmt=NONE } The RT2501/RT2573 USB dongle is then able to get a DHCP address and ping the router 192.168.31.1. However, I am never able to get WPA-PSK working. network={ ssid="MYWIFI" key_mgmt=WPA-PSK psk="1234567890" } Please advise, thanks in advance. Mike Mon, 05 Nov 2018 06:05:43 GMT http://community.qnx.com/sf/go/post119275 mike scott(deleted) 2018-11-05T06:05:43Z post119274: how to use devnp-rum.so ? http://community.qnx.com/sf/go/post119274 Hi QNX, beaglebone black, QNX 6.5SP1, bsp-nto650-ti-beaglebone-sp1-trunk-201209071340, devnp-rum.so, 148f:2573 Ralink Technology, Corp. RT2501/RT2573 Wireless Adapter I am working on this USB wifi dongle, RT2501/RT2573 Wireless Adapter, with beaglebone black on QNX 6.5SP1. # io-pkt-v4-hc -drum # ifconfig rum0 up # wpa_supplicant -B -i rum0 -c /etc/wpa_supplicant.conf # dhcp.client -i rum0 # if_up -r 15 rum0 if_up: retries exhausted Process 131097 (if_up) exited status=7. 
# ifconfig rum0 rum0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500 ssid MYWIFI nwkey 65536:"","","","" powersave off bssid 28:6c:07:9a:c1:8d chan 4 address: 78:44:76:7a:8c:af media: IEEE802.11 autoselect (OFDM36 mode 11g) status: active inet 0.0.0.0 netmask 0xff000000 broadcast 255.255.255.255 # cat /etc/wpa_supplicant.conf network={ ssid="MYWIFI" psk="1234567890" } I also tried to set up a fixed network address, # io-pkt-v4-hc -drum # ifconfig rum0 up # wpa_supplicant -B -i rum0 -c /etc/wpa_supplicant.conf # ifconfig rum0 192.168.31.201 # ifconfig rum0 rum0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500 ssid MYWIFI nwkey 65536:"","","","" powersave off bssid 28:6c:07:9a:c1:8d chan 4 address: 78:44:76:7a:8c:af media: IEEE802.11 autoselect (OFDM36 mode 11g) status: active inet 192.168.31.201 netmask 0xffffff00 broadcast 192.168.31.255 # ping 192.168.31.1 PING 192.168.31.1 (192.168.31.1): 56 data bytes ping: sendto: Host is down ping: sendto: Host is down ping: sendto: Host is down The router is working fine; my laptop/smart phone are surfing the Internet now. Please advise, thanks in advance Mike Sat, 03 Nov 2018 12:14:34 GMT http://community.qnx.com/sf/go/post119274 mike scott(deleted) 2018-11-03T12:14:34Z post119273: please help build the example packages http://community.qnx.com/sf/go/post119273 Hi QNX, beaglebone black, QNX 6.5SP1, bsp-nto650-ti-beaglebone-sp1-trunk-201209071340 I am working on usb-wifi drivers, and collected example packages from the previous post in this forum, http://community.qnx.com/sf/discussion/do/listPosts/projects.networking/discussion.drivers.topc22270 such as run.src.tar.gz, rum_ural.tar.gz in the post attachment. However, the packages cannot be compiled, apparently due to missing headers, such as /sys/kernel.h Please advise, thanks in advance. 
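Returning to the WPA-PSK failure from the devnp-rum.so posts above: wpa_supplicant is often easier to debug when the protocol and ciphers are spelled out rather than left to negotiation. Whether the 6.5-era rum driver supports WPA2/CCMP at all is not confirmed in this thread, so the WPA/TKIP choice below is an assumption to test, not a known fix; the ssid and psk are the values from the thread.

```shell
# Write an explicit WPA(TKIP) variant of the configuration quoted above.
# proto/pairwise/group are assumptions for an RT2573-era driver; adjust
# them to match what your access point actually offers.
cat <<'EOF' > /tmp/wpa_supplicant.conf
network={
    ssid="MYWIFI"
    key_mgmt=WPA-PSK
    proto=WPA
    pairwise=TKIP
    group=TKIP
    psk="1234567890"
}
EOF
```

Then point the supplicant at it (wpa_supplicant -i rum0 -c /tmp/wpa_supplicant.conf), running it in the foreground with -d instead of -B so the debug output shows whether the 4-way handshake completes or which step fails.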
Mike Thu, 01 Nov 2018 12:26:10 GMT http://community.qnx.com/sf/go/post119273 mike scott(deleted) 2018-11-01T12:26:10Z post119224: Re: Ethernet AVB driver for I.MX6 Solo X http://community.qnx.com/sf/go/post119224 The i.MX6 SoloX driver includes support for Ethernet AVB, consisting of ioctl()s that set traffic shaping, multiple Tx and Rx queues, and a low-latency receive thread for AVB traffic. It also supports PTP. Regards, Nick Fri, 19 Oct 2018 16:42:30 GMT http://community.qnx.com/sf/go/post119224 Nick Reilly 2018-10-19T16:42:30Z post119223: Ethernet AVB driver for I.MX6 Solo X http://community.qnx.com/sf/go/post119223 Hi, On Foundry27 I see a link to download the i.MX6 SoloX BSP, and wanted to know if this BSP includes the corresponding Ethernet AVB driver, i.e. a driver for the traffic shaping and multiple queues that are part of the SoloX SoC? I am assuming a PTP driver will be available in the BSP anyway. If not the i.MX6 SoloX, please let me know whether any other BSP for other hardware includes these drivers. Thanks a lot in advance for help. Regards Ashok Fri, 19 Oct 2018 15:52:06 GMT http://community.qnx.com/sf/go/post119223 Ashok Kumar 2018-10-19T15:52:06Z post119046: [QNX 6.6] Ethernet over USB using RNDIS on x86 http://community.qnx.com/sf/go/post119046 I have an x86 target which is acting as a USB host to communicate with a USB device using RNDIS. I'm using QNX 6.6. I'm attempting the following:
io-pkt-v4-hc verbose -p tcpip
io-usb-dcd -dehci -duhci -dohci
waitfor /dev/io-usb-dcd/io-usb 5
mount -Tio-pkt -o mac=00D056F2B512,usbdnet_mac=00D056F2B513,protocol=rndis devnp-usbdnet.so
if_up -p rndis0
ifconfig rndis0 10.100.8.113
What I see is the following: # sloginfo -c Time Sev Major Minor Args Jul 16 15:51:19 5 14 0 tcpip starting Jul 16 15:51:19 3 14 0 Using pseudo random generator. See "random" option Jul 16 15:51:19 5 14 0 initializing IPsec... done Jul 16 15:51:19 5 14 0 IPsec: Initialized Security Association Processing. 
Jul 16 15:51:19 2 10 0 usbmgr_connection_create() Failed to attach to root device for cfg (status = 19) Jul 16 15:51:19 2 10 0 usbdif_init() Couln't create configuration connection to the usb stack... using default descriptors ( error = 19 ) Jul 16 15:51:19 5 14 0 rndis0 Questions: 1) The usbdnet_mac is incorrect. How do I display the MAC address of the local USB port in QNX? I will try to see if I can get it when I attach the device to Windows. 2) Does anyone have a suggestion of why usbmgr_connection_create() is failing to attach? The device I wish to communicate with is showing up on USB 5. See attached file for info from the "usb -v" command. Fri, 10 Aug 2018 21:42:30 GMT http://community.qnx.com/sf/go/post119046 Tim Spargo(deleted) 2018-08-10T21:42:30Z post118973: Re: Intel i218 and QNX 6.3.2 http://community.qnx.com/sf/go/post118973 The i218 isn't supported by the 6.3.2 driver, so if you want support for it, you will have to contact your sales representative. On 2018-07-19, 9:55 AM, "Ramiro Vota" <community-noreply@qnx.com> wrote: Hi everybody, I recently bought an Advantech ARK 1550. This machine has an Intel i210 and an Intel i218 Ethernet chipset. I've installed QNX 6.3.2 and I downloaded the driver devn-e1000.so from the post http://community.qnx.com/sf/go/projects.networking/discussion.drivers.topc27366 This driver works OK for the i210, but I can't make it work for the i218. Is there any update of the devn-e1000.so that works with the i218? The device PCI-ID is DID=0x155A. Thanks in advance. Ramiro _______________________________________________ Networking Drivers http://community.qnx.com/sf/go/post118971 To cancel your subscription to this discussion, please e-mail drivers-networking-unsubscribe@community.qnx.com Thu, 19 Jul 2018 14:39:41 GMT http://community.qnx.com/sf/go/post118973 Hugh Brown 2018-07-19T14:39:41Z post118971: Intel i218 and QNX 6.3.2 http://community.qnx.com/sf/go/post118971 Hi everybody, I recently bought an Advantech ARK 1550. 
This machine has an Intel i210 and an Intel i218 Ethernet chipset. I've installed QNX 6.3.2 and I downloaded the driver devn-e1000.so from the post http://community.qnx.com/sf/go/projects.networking/discussion.drivers.topc27366 This driver works OK for the i210, but I can't make it work for the i218. Is there any update of the devn-e1000.so that works with the i218? The device PCI-ID is DID=0x155A. Thanks in advance. Ramiro Thu, 19 Jul 2018 14:12:32 GMT http://community.qnx.com/sf/go/post118971 Ramiro Vota(deleted) 2018-07-19T14:12:32Z post118959: Re: Intel I210 and I217 Gigabit ethernet support in devnp-e1000.so (qnx 6.5)? http://community.qnx.com/sf/go/post118959 The enumeration files are out of date, so you will have to do the following: Edit the /etc/rc.d/rc.local file and add the following:
slay -f io-pkt-v4 io-pkt-v4-hc io-pkt-v6-hc dhcp.client
sleep 2
io-pkt-v4-hc -de1000
if_up -p wm0
dhcp.client -iwm0
You can modify the above to suit your needs. Hugh. On 2018-07-16, 8:20 AM, "Santosh Patil" <community-noreply@qnx.com> wrote: It's a full-fledged OS, installed from the QNX 6.5 NTO ISO. I just replaced your provided files. _______________________________________________ Networking Drivers http://community.qnx.com/sf/go/post118958 To cancel your subscription to this discussion, please e-mail drivers-networking-unsubscribe@community.qnx.com Mon, 16 Jul 2018 12:44:42 GMT http://community.qnx.com/sf/go/post118959 Hugh Brown 2018-07-16T12:44:42Z post118958: Re: Intel I210 and I217 Gigabit ethernet support in devnp-e1000.so (qnx 6.5)? http://community.qnx.com/sf/go/post118958 It's a full-fledged OS, installed from the QNX 6.5 NTO ISO. I just replaced your provided files. Mon, 16 Jul 2018 12:38:14 GMT http://community.qnx.com/sf/go/post118958 Santosh Patil 2018-07-16T12:38:14Z post118957: Re: Intel I210 and I217 Gigabit ethernet support in devnp-e1000.so (qnx 6.5)? http://community.qnx.com/sf/go/post118957 Please post your build file, so that I can see what you are doing. 
Thanks, Hugh. On 2018-07-16, 6:59 AM, "Santosh Patil" <community-noreply@qnx.com> wrote: Hello Hugh, Thanks a lot for your help. They did actually work. Just one thing, they could not start on start-up; I had to run the script manually to get the drivers up and running, which I can put into some boot-up script. _______________________________________________ Networking Drivers http://community.qnx.com/sf/go/post118956 To cancel your subscription to this discussion, please e-mail drivers-networking-unsubscribe@community.qnx.com Mon, 16 Jul 2018 12:35:41 GMT http://community.qnx.com/sf/go/post118957 Hugh Brown 2018-07-16T12:35:41Z post118956: Re: Intel I210 and I217 Gigabit ethernet support in devnp-e1000.so (qnx 6.5)? http://community.qnx.com/sf/go/post118956 Hello Hugh, Thanks a lot for your help. They did actually work. Just one thing, they could not start on start-up; I had to run the script manually to get the drivers up and running, which I can put into some boot-up script. Mon, 16 Jul 2018 11:17:06 GMT http://community.qnx.com/sf/go/post118956 Santosh Patil 2018-07-16T11:17:06Z post118922: Re: Intel I210 and I217 Gigabit ethernet support in devnp-e1000.so (qnx 6.5)? http://community.qnx.com/sf/go/post118922 I have attached the latest io-pkt-v4-hc. On 2018-07-09, 8:26 AM, "Santosh Patil" <community-noreply@qnx.com> wrote: Hello, I'm trying devnp-e1000.so. I copied that particular file into my home folder and ran /sbin/io-pkt-v4-hc mount -T io-pkt /home/devnp-e1000.so. After running this I can see interfaces in nicinfo and also in gui-network-interface. But soon after running this command I'm getting the following error and the /sbin/io-pkt-v4-hc process gets halted: "ldd:FATAL: Unresolved symbol "stk_context_callback_2" called from devnp-e1000.so" Am I missing another library that needs to be loaded? 
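One way to catch this class of failure before mounting a prebuilt driver is to list the undefined symbols the .so expects its host process to provide. The `nm` listing below is fabricated for illustration (on a development host you would run something like `nm -D devnp-e1000.so` against the real binary); the point is that any 'U' symbol absent from the running io-pkt binary produces exactly the "Unresolved symbol" error quoted above.

```shell
# Simulated `nm -D devnp-e1000.so` output. 'U' marks undefined symbols
# that the runtime linker must resolve from the binary loading the driver.
cat <<'EOF' > /tmp/nm_driver.txt
         U malloc
         U stk_context_callback_2
00001234 T e1000_entry
EOF

# List only the undefined symbols; if one of these is missing from the
# io-pkt you are running, the mount will fail at load time.
awk '$1 == "U" { print $2 }' /tmp/nm_driver.txt
```

Comparing that list against the symbols the stack exports tells you up front whether a newer io-pkt is required, which is what the reply above resolves by attaching an updated io-pkt-v4-hc.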
_______________________________________________ Networking Drivers http://community.qnx.com/sf/go/post118919 To cancel your subscription to this discussion, please e-mail drivers-networking-unsubscribe@community.qnx.com Mon, 09 Jul 2018 13:35:11 GMT http://community.qnx.com/sf/go/post118922 Hugh Brown 2018-07-09T13:35:11Z post118920: Re: Intel I210 and I217 Gigabit ethernet support in devnp-e1000.so (qnx 6.5)? http://community.qnx.com/sf/go/post118920 You will need the latest io-pkt-v4-hc to resolve this symbol. I'll have to see where you can find it and get back to you. On 2018-07-09, 8:26 AM, "Santosh Patil" <community-noreply@qnx.com> wrote: Hello, I'm trying devnp-e1000.so. I copied that particular file into my home folder and ran /sbin/io-pkt-v4-hc mount -T io-pkt /home/devnp-e1000.so. After running this I can see interfaces in nicinfo and also in gui-network-interface. But soon after running this command I'm getting the following error and the /sbin/io-pkt-v4-hc process gets halted: "ldd:FATAL: Unresolved symbol "stk_context_callback_2" called from devnp-e1000.so" Am I missing another library that needs to be loaded? _______________________________________________ Networking Drivers http://community.qnx.com/sf/go/post118919 To cancel your subscription to this discussion, please e-mail drivers-networking-unsubscribe@community.qnx.com Mon, 09 Jul 2018 12:47:57 GMT http://community.qnx.com/sf/go/post118920 Hugh Brown 2018-07-09T12:47:57Z post118919: Re: Intel I210 and I217 Gigabit ethernet support in devnp-e1000.so (qnx 6.5)? http://community.qnx.com/sf/go/post118919 Hello, I'm trying devnp-e1000.so. I copied that particular file into my home folder and ran /sbin/io-pkt-v4-hc mount -T io-pkt /home/devnp-e1000.so. After running this I can see interfaces in nicinfo and also in gui-network-interface. 
But soon after running this command I'm getting the following error and the /sbin/io-pkt-v4-hc process gets halted: "ldd:FATAL: Unresolved symbol "stk_context_callback_2" called from devnp-e1000.so" Am I missing another library that needs to be loaded? Mon, 09 Jul 2018 12:44:19 GMT http://community.qnx.com/sf/go/post118919 Santosh Patil 2018-07-09T12:44:19Z post118766: Re: [QNX 6.6] intel e1000 network driver works under startup-bios, not under startup-apic http://community.qnx.com/sf/go/post118766 Good news that you have it working! The QNX7 PCI server is a completely new driver and bears no resemblance to the QNX6 PCI server. The PCI source code that you have should be the latest, as there haven't been any changes to the code for a long time. Hugh. On 2018-04-12, 12:36 PM, "Brian Carnes(deleted)" <community-noreply@qnx.com> wrote: > Why are you running QNX6.6 with qemu-system-x86_64? Surely you should be running qemu-system-i386? No. We run everything (Linux, qnx 6.6, qnx 7.0) under qemu-system-x86_64. It works for the same reason that you can boot qnx6.6 (or any 32-bit only OS) on 64-bit x86 hardware. Since we have been deploying qnx 6.6 to 64-bit hardware since the beginning, it’d actually be inappropriate to test it under qemu-system-i386… > Even if we're seeing two types of failures here (TBD) Two types of failures, with identical symptoms. Seen-on-hardware failure is related to a Supermicro server (as opposed to Advantech), and is presumably a BSP issue. qemu failure is described below. > If you can't triage or fix the issue directly, can you post the snippet of code that is failing inside of pci-bios-v2? You didn’t respond to this, but we found pci-bios-v2 source code publicly available at: http://community.qnx.com/sf/wiki/do/viewPage/projects.bsp/wiki/X86Bios The problem occurs in bios-v2.c, in function bios_avail_irq(). 
With no support for PIIX3, get_LPC_vendorId() returns 0xffff, the originally assigned IRQ gets blown away on line 442, and everything errors out at the switch’s default block. We added rudimentary support for PIIX3 and preserving the IRQ assignment, and now see networking running fine under qemu for qnx6.6, whether in bios or apic mode. I don’t believe your users have access to QNX7’s pci-server source code (do we?) - I’d be interested in your assessment of why things work under QNX7…. Was there a fix in QNX7 that could get backported into QNX6/pci-bios-v2? Regards, Brian _______________________________________________ Networking Drivers http://community.qnx.com/sf/go/post118765 To cancel your subscription to this discussion, please e-mail drivers-networking-unsubscribe@community.qnx.com Thu, 12 Apr 2018 17:03:54 GMT http://community.qnx.com/sf/go/post118766 Hugh Brown 2018-04-12T17:03:54Z post118765: Re: [QNX 6.6] intel e1000 network driver works under startup-bios, not under startup-apic http://community.qnx.com/sf/go/post118765 > Why are you running QNX6.6 with qemu-system-x86_64? Surely you should be running qemu-system-i386? No. We run everything (Linux, qnx 6.6, qnx 7.0) under qemu-system-x86_64. It works for the same reason that you can boot qnx6.6 (or any 32-bit only OS) on 64-bit x86 hardware. Since we have been deploying qnx 6.6 to 64-bit hardware since the beginning, it’d actually be inappropriate to test it under qemu-system-i386… > Even if we're seeing two types of failures here (TBD) Two types of failures, with identical symptoms. Seen-on-hardware failure is related to a Supermicro server (as opposed to Advantech), and is presumably a BSP issue. qemu failure is described below. > If you can't triage or fix the issue directly, can you post the snippet of code that is failing inside of pci-bios-v2? 
You didn’t respond to this, but we found pci-bios-v2 source code publicly available at: http://community.qnx.com/sf/wiki/do/viewPage/projects.bsp/wiki/X86Bios The problem occurs in bios-v2.c , in function bios_avail_irq(). With no support for PIIX3, get_LPC_vendorId() returns 0xffff, the originally assigned IRQ gets blown away on line 442, and everything errors out at the switch’s default block. We added rudimentary support for PIIX3 and preserving the IRQ assignment, and now see networking running fine under qemu for qnx6.6, whether in bios or apic mode. I don’t believe your users have access to QNX7’s pci-server source code (do we?) - I’d be interested in your assessment of why things work under QNX7…. Was there a fix in QNX7 that could get back ported into QNX6/pci-bios-v2? Regards, Brian Thu, 12 Apr 2018 16:55:16 GMT http://community.qnx.com/sf/go/post118765 Brian Carnes(deleted) 2018-04-12T16:55:16Z post118764: Re: [QNX 6.6] intel e1000 network driver works under startup-bios, not under startup-apic http://community.qnx.com/sf/go/post118764 Why are you running QNX6.6 with qemu-system-x86_64? Surely you should be running qemu-system-i386? On 2018-04-11, 10:29 AM, "Brian Carnes(deleted)" <community-noreply@qnx.com> wrote: sloginfo w/ "pci-bios* -vvv" output attached as files to this post. The significant part seems to be: Apr 10 22:05:25 6 17 0 get_an_irq Requested MSI[X] vectors 0 Apr 10 22:05:25 2 17 0 get_an_irq: No irq2 Apr 10 22:05:25 2 10 0 pci_attach_device failed We see an identical failure when running w/ "pci-bios[-v2] -M -vvv", to disable MSI and MSI-X completely, so the problem appears to be in the fallback, non-MSI interrupt handling logic. From earlier, note that /sbin/pci output shows it is sitting on IRQ11. It's been a long time since I've had to think about such things, but this sounds like something in the IRQ2 -> IRQ8-IRQ15 slave PIC logic. You should be able to reproduce this all under qemu-system-x86_64 over there. 
Things work under QNX6.6/startup-bios, QNX7, and Linux. They do not work under QNX6.6/startup-apic. Thanks. _______________________________________________ Networking Drivers http://community.qnx.com/sf/go/post118759 To cancel your subscription to this discussion, please e-mail drivers-networking-unsubscribe@community.qnx.com Thu, 12 Apr 2018 15:34:14 GMT http://community.qnx.com/sf/go/post118764 Hugh Brown 2018-04-12T15:34:14Z post118761: Re: [QNX 6.6] intel e1000 network driver works under startup-bios, not under startup-apic http://community.qnx.com/sf/go/post118761 The original bug report was from real hardware. In brief, it was "USB works reliably under only one of startup-bios/startup-apic under qnx 6.6, networking only works under the other..." These rapid iterations running your experimental drivers, etc. are happening on our regression testing testbed, which is using qemu for "VM-in-the-loop continuous integration/testing", pending enough HW being present for true "HW-in-the-loop CI/testing..." Since we have a nice clean failure example in qemu, I hadn't pushed your requests back to the team w/ hardware. Based on your question, I've now pinged the original team that reported this on hardware. Even if we're seeing two types of failures here (TBD), it'd be nice to fix the original issue and the running-under-qemu issue. If you can't triage or fix the issue directly, can you post the snippet of code that is failing inside of pci-bios-v2? We have folks with qemu-innards-expertise over here. I'll let you know when I hear back from the on-hardware original bug folks. Thanks Wed, 11 Apr 2018 17:29:03 GMT http://community.qnx.com/sf/go/post118761 Brian Carnes(deleted) 2018-04-11T17:29:03Z post118760: Re: [QNX 6.6] intel e1000 network driver works under startup-bios, not under startup-apic http://community.qnx.com/sf/go/post118760 Just to be clear, this problem only occurs running under qemu? Thanks, Hugh. 
On 2018-04-11, 10:29 AM, "Brian Carnes(deleted)" <community-noreply@qnx.com> wrote: sloginfo w/ "pci-bios* -vvv" output attached as files to this post. The significant part seems to be: Apr 10 22:05:25 6 17 0 get_an_irq Requested MSI[X] vectors 0 Apr 10 22:05:25 2 17 0 get_an_irq: No irq2 Apr 10 22:05:25 2 10 0 pci_attach_device failed We see an identical failure when running w/ "pci-bios[-v2] -M -vvv", to disable MSI and MSI-X completely, so the problem appears to be in the fallback, non-MSI interrupt handling logic. From earlier, note that /sbin/pci output shows it is sitting on IRQ11. It's been a long time since I've had to think about such things, but this sounds like something in the IRQ2 -> IRQ8-IRQ15 slave PIC logic. You should be able to reproduce this all under qemu-system-x86_64 over there. Things work under QNX6.6/startup-bios, QNX7, and Linux. They do not work under QNX6.6/startup-apic. Thanks. _______________________________________________ Networking Drivers http://community.qnx.com/sf/go/post118759 To cancel your subscription to this discussion, please e-mail drivers-networking-unsubscribe@community.qnx.com Wed, 11 Apr 2018 16:00:06 GMT http://community.qnx.com/sf/go/post118760 Hugh Brown 2018-04-11T16:00:06Z post118759: Re: [QNX 6.6] intel e1000 network driver works under startup-bios, not under startup-apic http://community.qnx.com/sf/go/post118759 sloginfo w/ "pci-bios* -vvv" output attached as files to this post. The significant part seems to be: Apr 10 22:05:25 6 17 0 get_an_irq Requested MSI[X] vectors 0 Apr 10 22:05:25 2 17 0 get_an_irq: No irq2 Apr 10 22:05:25 2 10 0 pci_attach_device failed We see an identical failure when running w/ "pci-bios[-v2] -M -vvv", to disable MSI and MSI-X completely, so the problem appears to be in the fallback, non-MSI interrupt handling logic. From earlier, note that /sbin/pci output shows it is sitting on IRQ11. 
It's been a long time since I've had to think about such things, but this sounds like something in the IRQ2 -> IRQ8-IRQ15 slave PIC logic. You should be able to reproduce this all under qemu-system-x86_64 over there. Things work under QNX6.6/startup-bios, QNX7, and Linux. They do not work under QNX6.6/startup-apic. Thanks. Wed, 11 Apr 2018 14:49:07 GMT http://community.qnx.com/sf/go/post118759 Brian Carnes(deleted) 2018-04-11T14:49:07Z post118758: Re: [QNX 6.6] intel e1000 network driver works under startup-bios, not under startup-apic http://community.qnx.com/sf/go/post118758 Please can you rebuild your boot image and make sure that you have "slogger -s512k" and also "pci-bios -vvv". I need to get the full output from both the driver and the PCI server, as it appears that the pci_attach_device() is failing on MSI interrupts for some reason. Please post the complete sloginfo output. Thanks, Hugh. On 2018-04-09, 3:46 PM, "Brian Carnes(deleted)" <community-noreply@qnx.com> wrote: Your latest driver with extra logging, under pci-bios-v2: Time Sev Major Minor Args Apr 09 19:35:32 5 14 0 tcpip starting Apr 09 19:35:32 3 14 0 Using pseudo random generator. See "random" option Apr 09 19:35:32 5 14 0 initializing IPsec... done Apr 09 19:35:32 5 14 0 IPsec: Initialized Security Association Processing. 
Apr 09 19:35:32 5 14 0 wm0 Apr 09 19:35:32 6 10 0 i82544_pci_attach1 Apr 09 19:35:32 6 10 0 i82544_pci_attach2 - media_rate -1 - duplex -1 Apr 09 19:35:32 6 10 0 i82544_pci_attach3 - busdevice 0x100e Apr 09 19:35:32 6 10 0 i82544_pci_attach4 Apr 09 19:35:32 6 10 0 i82544_pci_attach5 Apr 09 19:35:32 6 10 0 i82544_pci_attach6 Apr 09 19:35:32 2 10 0 pci_attach_device failed Apr 09 19:35:32 2 14 0 Unable to init /tmp/devnp-e1000.so: No such device And running this same driver under pci-bios (v1) still gives an Abort that is not present in the original 6.6.0 devnp-e1000.so, but shows that it makes it past pci_attach_device(): Time Sev Major Minor Args Apr 09 19:35:42 6 17 0 intrinfo size = 320 entry size = 64 count = 5 Apr 09 19:35:42 6 17 0 base = 0x80010000 num = 6 cascade = 07fffffff intr 48 Apr 09 19:35:42 6 17 0 base = 0x8001ffff num = 1 cascade = 07fffffff intr 47 Apr 09 19:35:42 6 17 0 base = 0x80000000 num = 3 cascade = 07fffffff intr 2 Apr 09 19:35:42 6 17 0 base = 0x00000000 num = 24 cascade = 07fffffff intr 54 Apr 09 19:35:42 6 17 0 base = 0x00000100 num = 177 cascade = 07fffffff intr 78 Apr 09 19:35:42 6 17 0 scan_device exiting Apr 09 19:35:42 6 17 0 scan_device exiting Apr 09 19:35:42 6 17 0 scan_device exiting Apr 09 19:35:42 2 17 0 scan_windows: Alloc failed fd000008 - Size 1000000 Apr 09 19:35:42 5 14 0 tcpip starting Apr 09 19:35:42 3 14 0 Using pseudo random generator. See "random" option Apr 09 19:35:42 5 14 0 initializing IPsec... done Apr 09 19:35:42 5 14 0 IPsec: Initialized Security Association Processing.
Apr 09 19:35:42 5 14 0 wm0 Apr 09 19:35:42 6 10 0 i82544_pci_attach1 Apr 09 19:35:42 6 10 0 i82544_pci_attach2 - media_rate -1 - duplex -1 Apr 09 19:35:42 6 10 0 i82544_pci_attach3 - busdevice 0x100e Apr 09 19:35:42 6 10 0 i82544_pci_attach4 Apr 09 19:35:42 6 10 0 i82544_pci_attach5 Apr 09 19:35:42 6 10 0 i82544_pci_attach6 Apr 09 19:35:42 6 10 0 i82544_pci_attach7 Apr 09 19:35:42 6 10 0 i82544_pci_attach8 Apr 09 19:35:42 6 10 0 e1000_set_mac_type Apr 09 19:35:42 6 10 0 e1000_init_mac_ops_generic ... Apr 09 19:35:42 1 14 0 nic_delay() called from proc0! Tue, 10 Apr 2018 13:46:52 GMT http://community.qnx.com/sf/go/post118758 Hugh Brown 2018-04-10T13:46:52Z post118757: Re: Intel e1000 Gbit driver for QNX6.3.x http://community.qnx.com/sf/go/post118757 No, we don't have a tigon3 driver update for 6.3. Hugh. On 2018-04-10, 3:23 AM, "mario sangalli" <community-noreply@qnx.com> wrote: Thanks, Hugh! It works for me. Just one more request: the second NIC is a 1Gb Broadcom BCM5785 chip, DID=0x1699. I've tested the tigon3 driver under 6.5 and it works fine, whilst the 6.3.2 tigon3 driver does not recognize the chip and, once started, stops working after a few pings. Do you have an updated tigon3 driver for 6.3? Any help is appreciated, Thanks again Mario Tue, 10 Apr 2018 11:55:59 GMT http://community.qnx.com/sf/go/post118757 Hugh Brown 2018-04-10T11:55:59Z post118756: Re: Intel e1000 Gbit driver for QNX6.3.x http://community.qnx.com/sf/go/post118756 Thanks, Hugh! It works for me.
Just one more request: the second NIC is a 1Gb Broadcom BCM5785 chip, DID=0x1699. I've tested the tigon3 driver under 6.5 and it works fine, whilst the 6.3.2 tigon3 driver does not recognize the chip and, once started, stops working after a few pings. Do you have an updated tigon3 driver for 6.3? Any help is appreciated, Thanks again Mario Tue, 10 Apr 2018 07:42:27 GMT http://community.qnx.com/sf/go/post118756 mario sangalli 2018-04-10T07:42:27Z post118755: Re: [QNX 6.6] intel e1000 network driver works under startup-bios, not under startup-apic http://community.qnx.com/sf/go/post118755 Your latest driver with extra logging, under pci-bios-v2: Time Sev Major Minor Args Apr 09 19:35:32 5 14 0 tcpip starting Apr 09 19:35:32 3 14 0 Using pseudo random generator. See "random" option Apr 09 19:35:32 5 14 0 initializing IPsec... done Apr 09 19:35:32 5 14 0 IPsec: Initialized Security Association Processing. Apr 09 19:35:32 5 14 0 wm0 Apr 09 19:35:32 6 10 0 i82544_pci_attach1 Apr 09 19:35:32 6 10 0 i82544_pci_attach2 - media_rate -1 - duplex -1 Apr 09 19:35:32 6 10 0 i82544_pci_attach3 - busdevice 0x100e Apr 09 19:35:32 6 10 0 i82544_pci_attach4 Apr 09 19:35:32 6 10 0 i82544_pci_attach5 Apr 09 19:35:32 6 10 0 i82544_pci_attach6 Apr 09 19:35:32 2 10 0 pci_attach_device failed Apr 09 19:35:32 2 14 0 Unable to init /tmp/devnp-e1000.so: No such device And running this same driver under pci-bios (v1) still gives an Abort that is not present in the original 6.6.0 devnp-e1000.so, but shows that it makes it past pci_attach_device(): Time Sev Major Minor Args Apr 09 19:35:42 6 17 0 intrinfo size = 320 entry size = 64 count = 5 Apr 09 19:35:42 6 17 0 base = 0x80010000 num = 6 cascade = 07fffffff intr 48 Apr 09 19:35:42 6 17 0 base = 0x8001ffff num = 1 cascade = 07fffffff intr 47 Apr 09 19:35:42 6 17 0 base = 0x80000000 num = 3 cascade = 07fffffff intr 2 Apr 09 19:35:42 6 17 0 base = 0x00000000 num = 24 cascade = 07fffffff intr 54 Apr 09 19:35:42 6 17 0 base = 0x00000100 num = 177 cascade
= 07fffffff intr 78 Apr 09 19:35:42 6 17 0 scan_device exiting Apr 09 19:35:42 6 17 0 scan_device exiting Apr 09 19:35:42 6 17 0 scan_device exiting Apr 09 19:35:42 2 17 0 scan_windows: Alloc failed fd000008 - Size 1000000 Apr 09 19:35:42 5 14 0 tcpip starting Apr 09 19:35:42 3 14 0 Using pseudo random generator. See "random" option Apr 09 19:35:42 5 14 0 initializing IPsec... done Apr 09 19:35:42 5 14 0 IPsec: Initialized Security Association Processing. Apr 09 19:35:42 5 14 0 wm0 Apr 09 19:35:42 6 10 0 i82544_pci_attach1 Apr 09 19:35:42 6 10 0 i82544_pci_attach2 - media_rate -1 - duplex -1 Apr 09 19:35:42 6 10 0 i82544_pci_attach3 - busdevice 0x100e Apr 09 19:35:42 6 10 0 i82544_pci_attach4 Apr 09 19:35:42 6 10 0 i82544_pci_attach5 Apr 09 19:35:42 6 10 0 i82544_pci_attach6 Apr 09 19:35:42 6 10 0 i82544_pci_attach7 Apr 09 19:35:42 6 10 0 i82544_pci_attach8 Apr 09 19:35:42 6 10 0 e1000_set_mac_type Apr 09 19:35:42 6 10 0 e1000_init_mac_ops_generic ... Apr 09 19:35:42 1 14 0 nic_delay() called from proc0! Mon, 09 Apr 2018 20:05:20 GMT http://community.qnx.com/sf/go/post118755 Brian Carnes(deleted) 2018-04-09T20:05:20Z post118754: Re: [QNX 6.6] intel e1000 network driver works under startup-bios, not under startup-apic http://community.qnx.com/sf/go/post118754 I have inserted some debug code into the driver to try and determine what is happening, so please run the attached driver as before and post the sloginfo output. Thanks, Hugh. On 2018-04-09, 2:30 PM, "Brian Carnes(deleted)" <community-noreply@qnx.com> wrote: Thank you, Hugh. 
With your new driver, I get the same results: # io-pkt-v4-hc -d /tmp/devnp-e1000.so verbose=4 -p tcpip stacksize=8192 yields Unable to init /tmp/devnp-e1000.so: No such device And slogger output of: Time Sev Major Minor Args Apr 09 17:52:52 6 17 0 intrinfo size = 320 entry size = 64 count = 5 Apr 09 17:52:52 6 17 0 base = 0x80010000 num = 6 cascade = 0x7fffffff intr 48 Apr 09 17:52:52 6 17 0 base = 0x8001ffff num = 1 cascade = 0x7fffffff intr 47 Apr 09 17:52:52 6 17 0 base = 0x80000000 num = 3 cascade = 0x7fffffff intr 2 Apr 09 17:52:52 6 17 0 base = 0x00000000 num = 24 cascade = 0x7fffffff intr 54 Apr 09 17:52:52 6 17 0 base = 0x00000100 num = 177 cascade = 0x7fffffff intr 78 Apr 09 17:52:52 6 17 0 find_host_bridge - bridge_count 0 - num_pci_bridges 0 Apr 09 17:52:52 2 17 0 scan_windows: Alloc failed 0xfd000008 - Size 0x1000000 Apr 09 17:52:52 5 14 0 tcpip starting Apr 09 17:52:52 3 14 0 Using pseudo random generator. See "random" option Apr 09 17:52:52 5 14 0 initializing IPsec... done Apr 09 17:52:52 5 14 0 IPsec: Initialized Security Association Processing. 
Apr 09 17:52:52 5 14 0 wm0 Apr 09 17:52:52 2 10 0 pci_attach_device failed Apr 09 17:52:52 2 14 0 Unable to init devnp-e1000.so: No such device running the same experiment under pci-bios (v1), with the original devnp-e1000.so, where things work, produces logging info of: Time Sev Major Minor Args Apr 09 17:52:48 6 17 0 intrinfo size = 320 entry size = 64 count = 5 Apr 09 17:52:48 6 17 0 base = 0x80010000 num = 6 cascade = 07fffffff intr 48 Apr 09 17:52:48 6 17 0 base = 0x8001ffff num = 1 cascade = 07fffffff intr 47 Apr 09 17:52:48 6 17 0 base = 0x80000000 num = 3 cascade = 07fffffff intr 2 Apr 09 17:52:48 6 17 0 base = 0x00000000 num = 24 cascade = 07fffffff intr 54 Apr 09 17:52:48 6 17 0 base = 0x00000100 num = 177 cascade = 07fffffff intr 78 Apr 09 17:52:48 6 17 0 scan_device exiting Apr 09 17:52:48 6 17 0 scan_device exiting Apr 09 17:52:48 6 17 0 scan_device exiting Apr 09 17:52:48 2 17 0 scan_windows: Alloc failed fd000008 - Size 1000000 Apr 09 17:52:49 5 14 0 tcpip starting Apr 09 17:52:49 3 14 0 Using pseudo random generator. See "random" option Apr 09 17:52:49 5 14 0 initializing IPsec... done Apr 09 17:52:49 5 14 0 IPsec: Initialized Security Association Processing. Apr 09 17:52:49 5 14 0 wm0 Apr 09 17:52:49 6 10 0 e1000_set_mac_type Apr 09 17:52:49 6 10 0 e1000_init_mac_ops_generic ... 
Interestingly, doing a quick run of your experimental driver from within pci-bios (v1) mode, it no longer works as the shipping devnp-e1000.so does, but instead yields an abort: # io-pkt-v4-hc -d /tmp/devnp-e1000.so verbose=4 -p tcpip stacksize=8192 -v Abort # sloginfo Time Sev Major Minor Args Apr 09 17:53:09 6 17 0 intrinfo size = 320 entry size = 64 count = 5 Apr 09 17:53:09 6 17 0 base = 0x80010000 num = 6 cascade = 07fffffff intr 48 Apr 09 17:53:09 6 17 0 base = 0x8001ffff num = 1 cascade = 07fffffff intr 47 Apr 09 17:53:09 6 17 0 base = 0x80000000 num = 3 cascade = 07fffffff intr 2 Apr 09 17:53:09 6 17 0 base = 0x00000000 num = 24 cascade = 07fffffff intr 54 Apr 09 17:53:09 6 17 0 base = 0x00000100 num = 177 cascade = 07fffffff intr 78 Apr 09 17:53:09 6 17 0 scan_device exiting Apr 09 17:53:09 6 17 0 scan_device exiting Apr 09 17:53:09 6 17 0 scan_device exiting Apr 09 17:53:09 2 17 0 scan_windows: Alloc failed fd000008 - Size 1000000 Apr 09 17:53:09 5 14 0 tcpip starting Apr 09 17:53:09 3 14 0 Using pseudo random generator. See "random" option Apr 09 17:53:09 5 14 0 initializing IPsec... done Apr 09 17:53:09 5 14 0 IPsec: Initialized Security Association Processing. 
Apr 09 17:53:09 5 14 0 wm0 Apr 09 17:53:09 6 10 0 e1000_set_mac_type Apr 09 17:53:09 6 10 0 e1000_init_mac_ops_generic Apr 09 17:53:09 6 10 0 e1000_init_phy_ops_generic Apr 09 17:53:09 6 10 0 e1000_init_nvm_ops_generic Apr 09 17:53:09 6 10 0 e1000_init_function_pointers_82540 Apr 09 17:53:09 6 10 0 e1000_init_mac_params_82540 Apr 09 17:53:09 6 10 0 e1000_init_nvm_params_82540 Apr 09 17:53:09 6 10 0 e1000_get_phy_id Apr 09 17:53:09 6 10 0 e1000_read_phy_reg_m88 Apr 09 17:53:09 6 10 0 e1000_null_ops_generic Apr 09 17:53:09 6 10 0 e1000_read_phy_reg_mdic Apr 09 17:53:09 6 10 0 e1000_null_phy_generic Apr 09 17:53:09 6 10 0 e1000_read_phy_reg_m88 Apr 09 17:53:09 6 10 0 e1000_null_ops_generic Apr 09 17:53:09 6 10 0 e1000_read_phy_reg_mdic Apr 09 17:53:09 6 10 0 e1000_null_phy_generic Apr 09 17:53:09 6 10 0 e1000_null_ops_generic Apr 09 17:53:09 6 10 0 i82544_pci_attach: rar entries 15 Apr 09 17:53:09 6 10 0 e1000_get_bus_info_pci_generic Apr 09 17:53:09 6 10 0 e1000_null_ops_generic Apr 09 17:53:09 6 10 0 e1000_reset_hw_82540 Apr 09 17:53:09 6 10 0 Masking off all interrupts Apr 09 17:53:09 1 14 0 nic_delay() called from proc0! # _______________________________________________ Networking Drivers http://community.qnx.com/sf/go/post118753 To cancel your subscription to this discussion, please e-mail drivers-networking-unsubscribe@community.qnx.com Mon, 09 Apr 2018 19:03:49 GMT http://community.qnx.com/sf/go/post118754 Hugh Brown 2018-04-09T19:03:49Z post118753: Re: [QNX 6.6] intel e1000 network driver works under startup-bios, not under startup-apic http://community.qnx.com/sf/go/post118753 Thank you, Hugh. 
With your new driver, I get the same results: # io-pkt-v4-hc -d /tmp/devnp-e1000.so verbose=4 -p tcpip stacksize=8192 yields Unable to init /tmp/devnp-e1000.so: No such device And slogger output of: Time Sev Major Minor Args Apr 09 17:52:52 6 17 0 intrinfo size = 320 entry size = 64 count = 5 Apr 09 17:52:52 6 17 0 base = 0x80010000 num = 6 cascade = 0x7fffffff intr 48 Apr 09 17:52:52 6 17 0 base = 0x8001ffff num = 1 cascade = 0x7fffffff intr 47 Apr 09 17:52:52 6 17 0 base = 0x80000000 num = 3 cascade = 0x7fffffff intr 2 Apr 09 17:52:52 6 17 0 base = 0x00000000 num = 24 cascade = 0x7fffffff intr 54 Apr 09 17:52:52 6 17 0 base = 0x00000100 num = 177 cascade = 0x7fffffff intr 78 Apr 09 17:52:52 6 17 0 find_host_bridge - bridge_count 0 - num_pci_bridges 0 Apr 09 17:52:52 2 17 0 scan_windows: Alloc failed 0xfd000008 - Size 0x1000000 Apr 09 17:52:52 5 14 0 tcpip starting Apr 09 17:52:52 3 14 0 Using pseudo random generator. See "random" option Apr 09 17:52:52 5 14 0 initializing IPsec... done Apr 09 17:52:52 5 14 0 IPsec: Initialized Security Association Processing. 
Apr 09 17:52:52 5 14 0 wm0 Apr 09 17:52:52 2 10 0 pci_attach_device failed Apr 09 17:52:52 2 14 0 Unable to init devnp-e1000.so: No such device running the same experiment under pci-bios (v1), with the original devnp-e1000.so, where things work, produces logging info of: Time Sev Major Minor Args Apr 09 17:52:48 6 17 0 intrinfo size = 320 entry size = 64 count = 5 Apr 09 17:52:48 6 17 0 base = 0x80010000 num = 6 cascade = 07fffffff intr 48 Apr 09 17:52:48 6 17 0 base = 0x8001ffff num = 1 cascade = 07fffffff intr 47 Apr 09 17:52:48 6 17 0 base = 0x80000000 num = 3 cascade = 07fffffff intr 2 Apr 09 17:52:48 6 17 0 base = 0x00000000 num = 24 cascade = 07fffffff intr 54 Apr 09 17:52:48 6 17 0 base = 0x00000100 num = 177 cascade = 07fffffff intr 78 Apr 09 17:52:48 6 17 0 scan_device exiting Apr 09 17:52:48 6 17 0 scan_device exiting Apr 09 17:52:48 6 17 0 scan_device exiting Apr 09 17:52:48 2 17 0 scan_windows: Alloc failed fd000008 - Size 1000000 Apr 09 17:52:49 5 14 0 tcpip starting Apr 09 17:52:49 3 14 0 Using pseudo random generator. See "random" option Apr 09 17:52:49 5 14 0 initializing IPsec... done Apr 09 17:52:49 5 14 0 IPsec: Initialized Security Association Processing. Apr 09 17:52:49 5 14 0 wm0 Apr 09 17:52:49 6 10 0 e1000_set_mac_type Apr 09 17:52:49 6 10 0 e1000_init_mac_ops_generic ... 
Interestingly, doing a quick run of your experimental driver from within pci-bios (v1) mode, it no longer works as the shipping devnp-e1000.so does, but instead yields an abort: # io-pkt-v4-hc -d /tmp/devnp-e1000.so verbose=4 -p tcpip stacksize=8192 -v Abort # sloginfo Time Sev Major Minor Args Apr 09 17:53:09 6 17 0 intrinfo size = 320 entry size = 64 count = 5 Apr 09 17:53:09 6 17 0 base = 0x80010000 num = 6 cascade = 07fffffff intr 48 Apr 09 17:53:09 6 17 0 base = 0x8001ffff num = 1 cascade = 07fffffff intr 47 Apr 09 17:53:09 6 17 0 base = 0x80000000 num = 3 cascade = 07fffffff intr 2 Apr 09 17:53:09 6 17 0 base = 0x00000000 num = 24 cascade = 07fffffff intr 54 Apr 09 17:53:09 6 17 0 base = 0x00000100 num = 177 cascade = 07fffffff intr 78 Apr 09 17:53:09 6 17 0 scan_device exiting Apr 09 17:53:09 6 17 0 scan_device exiting Apr 09 17:53:09 6 17 0 scan_device exiting Apr 09 17:53:09 2 17 0 scan_windows: Alloc failed fd000008 - Size 1000000 Apr 09 17:53:09 5 14 0 tcpip starting Apr 09 17:53:09 3 14 0 Using pseudo random generator. See "random" option Apr 09 17:53:09 5 14 0 initializing IPsec... done Apr 09 17:53:09 5 14 0 IPsec: Initialized Security Association Processing. 
Apr 09 17:53:09 5 14 0 wm0 Apr 09 17:53:09 6 10 0 e1000_set_mac_type Apr 09 17:53:09 6 10 0 e1000_init_mac_ops_generic Apr 09 17:53:09 6 10 0 e1000_init_phy_ops_generic Apr 09 17:53:09 6 10 0 e1000_init_nvm_ops_generic Apr 09 17:53:09 6 10 0 e1000_init_function_pointers_82540 Apr 09 17:53:09 6 10 0 e1000_init_mac_params_82540 Apr 09 17:53:09 6 10 0 e1000_init_nvm_params_82540 Apr 09 17:53:09 6 10 0 e1000_get_phy_id Apr 09 17:53:09 6 10 0 e1000_read_phy_reg_m88 Apr 09 17:53:09 6 10 0 e1000_null_ops_generic Apr 09 17:53:09 6 10 0 e1000_read_phy_reg_mdic Apr 09 17:53:09 6 10 0 e1000_null_phy_generic Apr 09 17:53:09 6 10 0 e1000_read_phy_reg_m88 Apr 09 17:53:09 6 10 0 e1000_null_ops_generic Apr 09 17:53:09 6 10 0 e1000_read_phy_reg_mdic Apr 09 17:53:09 6 10 0 e1000_null_phy_generic Apr 09 17:53:09 6 10 0 e1000_null_ops_generic Apr 09 17:53:09 6 10 0 i82544_pci_attach: rar entries 15 Apr 09 17:53:09 6 10 0 e1000_get_bus_info_pci_generic Apr 09 17:53:09 6 10 0 e1000_null_ops_generic Apr 09 17:53:09 6 10 0 e1000_reset_hw_82540 Apr 09 17:53:09 6 10 0 Masking off all interrupts Apr 09 17:53:09 1 14 0 nic_delay() called from proc0! # Mon, 09 Apr 2018 18:49:51 GMT http://community.qnx.com/sf/go/post118753 Brian Carnes(deleted) 2018-04-09T18:49:51Z post118752: Re: [QNX 6.6] intel e1000 network driver works under startup-bios, not under startup-apic http://community.qnx.com/sf/go/post118752 I have attached the latest 6.6.0 devnp-e1000.so driver to this email for you to try. If it still doesn't work under apic mode, will you please do the following and post the sloginfo output: Make sure that slogger is started with "slogger -s512k" slay io-pkt-v4-hc io-pkt-v4-hc -de1000 verbose=4 -p tcpip stacksize=8192 sloginfo > file post the output "file" Thanks, Hugh. On 2018-04-09, 1:22 PM, "Brian Carnes(deleted)" <community-noreply@qnx.com> wrote: We've been able to isolate this to a minimal test case that reproduces under qemu. 
io-pkt-v4-hc and/or e1000.so is able to find the networking hardware while under startup-bios (w/ pci-bios), but not under startup-apic (w/ pci-bios-v2 named as pci-bios in the build file as instructed[1]). Is there an update to e1000/io-pkt-v4-hc we should obtain to work under apic mode? Are there any workarounds to enable using this network adapter w/ apic? QNX 6.6, startup-apic mode: # io-pkt-v4-hc -d e1000 -v Unable to init devnp-e1000.so: No such device QNX 6.6, startup-bios mode: # io-pkt-v4-hc -d e1000 -v # ifconfig -a ... healthy output... QNX7 (x86_64) works for us as well. It is only qnx 6.6 under startup-apic that misbehaves. In both working and "no such device" cases under QNX6.6, /usr/sbin/pci reports: # pci PCI version = 2.10 ... Class = Network (Ethernet) Vendor ID = 8086h, Intel Corporation Device ID = 100eh, 82540EM Gigabit Ethernet Controller PCI index = 0h BAR - 0 [Mem] = febc0000h enabled BAR - 1 [I/O] = c000h enabled PCI Expansion ROM = feb80000h disabled PCI Int Pin = INT A Interrupt line = 11 CPU Interrupt = bh While we have customized startup-* binaries elsewhere, for the above tests we reverted to using vanilla QNX shipped startup-{bios,apic} to rule out our unrelated changes. As a third, rule-violating experiment, to give you another datapoint: we tried running our system with the forbidden combination of startup-apic and pci-bios-v1. Under this Frankenstein combo, the network driver does load correctly. So to the extent that one can separate apic mode from pci-bios-v2 mode, the problem seems to lie within pci-bios-v2.
Additional version info: # uname -a QNX localhost 6.6.0 2014/02/22-18:29:34EST x86pc x86 ## under startup-bios, w/ pci-bios (v1) # strings /proc/boot/pci-bios | tail DESCRIPTION=BIOS PCI server DATE=2014/02/21-13:06:13-EST STATE=stable HOST=sdp-builder-08 USER=builder VERSION=6.6.0 TAGID=276 ## under startup-apic, w/ pci-bios-v2 # strings /proc/boot/pci-bios | tail DESCRIPTION=BIOS PCI server DATE=2014/02/21-15:40:42-EST STATE=stable HOST=sdp-builder-08 USER=builder VERSION=6.6.0 TAGID=277 [1] http://www.qnx.com/developers/docs/6.6.0_anm11_wf10/#com.qnx.doc.neutrino.utilities/topic/p/pci-bios.html Mon, 09 Apr 2018 17:53:05 GMT http://community.qnx.com/sf/go/post118752 Hugh Brown 2018-04-09T17:53:05Z post118751: [QNX 6.6] intel e1000 network driver works under startup-bios, not under startup-apic http://community.qnx.com/sf/go/post118751 We've been able to isolate this to a minimal test case that reproduces under qemu. io-pkt-v4-hc and/or e1000.so is able to find the networking hardware while under startup-bios (w/ pci-bios), but not under startup-apic (w/ pci-bios-v2 named as pci-bios in the build file as instructed[1]). Is there an update to e1000/io-pkt-v4-hc we should obtain to work under apic mode? Are there any workarounds to enable using this network adapter w/ apic? QNX 6.6, startup-apic mode: # io-pkt-v4-hc -d e1000 -v Unable to init devnp-e1000.so: No such device QNX 6.6, startup-bios mode: # io-pkt-v4-hc -d e1000 -v # ifconfig -a ... healthy output... QNX7 (x86_64) works for us as well. It is only qnx 6.6 under startup-apic that misbehaves. In both working and "no such device" cases under QNX6.6, /usr/sbin/pci reports: # pci PCI version = 2.10 ...
Class = Network (Ethernet) Vendor ID = 8086h, Intel Corporation Device ID = 100eh, 82540EM Gigabit Ethernet Controller PCI index = 0h BAR - 0 [Mem] = febc0000h enabled BAR - 1 [I/O] = c000h enabled PCI Expansion ROM = feb80000h disabled PCI Int Pin = INT A Interrupt line = 11 CPU Interrupt = bh While we have customized startup-* binaries elsewhere, for the above tests we reverted to using vanilla QNX shipped startup-{bios,apic} to rule out our unrelated changes. As a third, rule-violating experiment, to give you another datapoint: We tried running our system with the forbidden combination of startup-apic, and pci-bios-v1. Under this frankenstein combo, the network driver does load correctly. So to the extent that one can separate apic mode from pci-bios-v2 mode, the problem seems to lie within pci-bios-v2. Additional version info: # uname -a QNX localhost 6.6.0 2014/02/22-18:29:34EST x86pc x86 ## under startup-bios, w/ pci-bios (v1) # strings /proc/boot/pci-bios | tail DESCRIPTION=BIOS PCI server DATE=2014/02/21-13:06:13-EST STATE=stable HOST=sdp-builder-08 USER=builder VERSION=6.6.0 TAGID=276 ## under startup-apic, w/ pci-bios-v2 # strings /proc/boot/pci-bios | tail DESCRIPTION=BIOS PCI server DATE=2014/02/21-15:40:42-EST STATE=stable HOST=sdp-builder-08 USER=builder VERSION=6.6.0 TAGID=277 [1] http://www.qnx.com/developers/docs/6.6.0_anm11_wf10/#com.qnx.doc.neutrino.utilities/topic/p/pci-bios.html Mon, 09 Apr 2018 17:41:34 GMT http://community.qnx.com/sf/go/post118751 Brian Carnes(deleted) 2018-04-09T17:41:34Z post118745: Re: Intel e1000 Gbit driver for QNX6.3.x http://community.qnx.com/sf/go/post118745 Thank you very much, I will test as soon as possible! Thu, 05 Apr 2018 16:06:32 GMT http://community.qnx.com/sf/go/post118745 mario sangalli 2018-04-05T16:06:32Z post118744: Re: Intel e1000 Gbit driver for QNX6.3.x http://community.qnx.com/sf/go/post118744 Please try the attached driver and let me know if it works. 
If not, I'll have to find a 6.3.0 machine and compile the latest driver for you. Thanks, Hugh. On 2018-04-05, 11:05 AM, "mario sangalli" <community-noreply@qnx.com> wrote: I'm sorry, it is 0x1539; the 0x1699 is for a second ethernet, a Broadcom BCM54610 chip. In 6.5 I've used the tigon3 driver; if it is also available for 6.3, it could be an alternative to the Intel chip. thanks, mario Thu, 05 Apr 2018 15:31:17 GMT http://community.qnx.com/sf/go/post118744 Hugh Brown 2018-04-05T15:31:17Z post118743: Re: Intel e1000 Gbit driver for QNX6.3.x http://community.qnx.com/sf/go/post118743 I'm sorry, it is 0x1539; the 0x1699 is for a second ethernet, a Broadcom BCM54610 chip. In 6.5 I've used the tigon3 driver; if it is also available for 6.3, it could be an alternative to the Intel chip. thanks, mario Thu, 05 Apr 2018 15:25:06 GMT http://community.qnx.com/sf/go/post118743 mario sangalli 2018-04-05T15:25:06Z post118742: Re: Intel e1000 Gbit driver for QNX6.3.x http://community.qnx.com/sf/go/post118742 Are you sure that this is the correct device ID? I have just downloaded the latest Linux driver source from Intel and there is no device ID 0x1699. All i210 adapters are in the 0x15xx range.
On 2018-04-05, 8:59 AM, "mario sangalli" <community-noreply@qnx.com> wrote: Thanks for the support: it's an Intel I210/I211 chip, with DID=x1699 Mario Thu, 05 Apr 2018 13:27:36 GMT http://community.qnx.com/sf/go/post118742 Hugh Brown 2018-04-05T13:27:36Z post118741: Re: Intel e1000 Gbit driver for QNX6.3.x http://community.qnx.com/sf/go/post118741 Thanks for the support: it's an Intel I210/I211 chip, with DID=x1699 Mario Thu, 05 Apr 2018 13:18:51 GMT http://community.qnx.com/sf/go/post118741 mario sangalli 2018-04-05T13:18:51Z post118740: Re: Intel e1000 Gbit driver for QNX6.3.x http://community.qnx.com/sf/go/post118740 What is the device PCI-ID that you want to support? We have an old e1000 driver for 6.3.0, but it doesn't support the latest Intel device IDs. On 2018-04-05, 6:27 AM, "mario sangalli" <community-noreply@qnx.com> wrote: Dear All, I have to update a 6.3 x86 application with an Intel Gbit chip... I'm searching for a devn-e1000.so driver or similar, if available. I would appreciate any support, best regards mario Thu, 05 Apr 2018 12:14:04 GMT http://community.qnx.com/sf/go/post118740 Hugh Brown 2018-04-05T12:14:04Z post118739: Intel e1000 Gbit driver for QNX6.3.x http://community.qnx.com/sf/go/post118739 Dear All, I have to update a 6.3 x86 application with an Intel Gbit chip...
I'm searching for a devn-e1000.so driver or something similar, if available; I would appreciate any support. Best regards, mario Thu, 05 Apr 2018 10:47:17 GMT http://community.qnx.com/sf/go/post118739 mario sangalli 2018-04-05T10:47:17Z post118584: Re: Issues with devnp-rtl8169.so SP1 http://community.qnx.com/sf/go/post118584 That 'comserv' is our server for serial interfaces, and the PCI interface card we use shares IRQs with other PCI components (of course). After using startup_apic and pci-bios-v2 it looks better at first glance, but I have to test some scenarios. We had used pci-bios only. Thank you for the hint! With best regards, Michael Kurt Thu, 15 Feb 2018 16:28:21 GMT http://community.qnx.com/sf/go/post118584 Michael Kurt 2018-02-15T16:28:21Z post118571: Re: Issues with devnp-rtl8169.so SP1 http://community.qnx.com/sf/go/post118571 OK, it seems that as soon as you start the second RTL device, it is sharing an interrupt with "comserv", whatever that is. Have you tried using startup_apic and pci-bios-v2 in your system at all? This might solve the problem of shared interrupts. On 2018-02-13, 8:57 AM, "Michael Kurt" <community-noreply@qnx.com> wrote: Please find attached the results I collected ('captured' always ca. 10 minutes after booting). The problem occurs when at least two interfaces are started.
I started the drivers / interfaces via 'mount -t io-pkt -o pci=...,vid=...', since this is the way our start scripts work, but the results are the same as when starting via 'io-pkt-v4-hc -drtl pci=x'. There are six scenarios: results_1 - 1 RTL interface started (devn-rtl, pci=0 - en0) results_2 - 2 RTL interfaces started (devn-rtl, pci=0-1 - en0-1) results_3 - 3 RTL interfaces started (devn-rtl, pci=0-2 - en0-2) results_4 - 4 RTL interfaces started (devn-rtl, pci=0-3 - en0-3) results_5 - 4 RTL interfaces and 1 Intel interface started (devn-rtl, pci=0 - en0-3, devnp-e1000, pci=0 - wm0) results_6 - 4 RTL interfaces and 2 Intel interfaces started (devn-rtl, pci=0 - en0-3, devnp-e1000, pci=0/1 - wm0-1) It looks to me like an issue with shared interrupts. Unfortunately the BIOS of the PCs we use does not allow changing the PCI IRQ assignment. With best regards, Michael. Tue, 13 Feb 2018 15:14:10 GMT http://community.qnx.com/sf/go/post118571 Hugh Brown 2018-02-13T15:14:10Z post118568: Re: Issues with devnp-rtl8169.so SP1 http://community.qnx.com/sf/go/post118568 Please find attached the results I collected ('captured' always ca. 10 minutes after booting). The problem occurs when at least two interfaces are started.
I started the drivers / interfaces via 'mount -t io-pkt -o pci=...,vid=...', since this is the way our start scripts work, but the results are the same as when starting via 'io-pkt-v4-hc -drtl pci=x'. There are six scenarios: results_1 - 1 RTL interface started (devn-rtl, pci=0 - en0) results_2 - 2 RTL interfaces started (devn-rtl, pci=0-1 - en0-1) results_3 - 3 RTL interfaces started (devn-rtl, pci=0-2 - en0-2) results_4 - 4 RTL interfaces started (devn-rtl, pci=0-3 - en0-3) results_5 - 4 RTL interfaces and 1 Intel interface started (devn-rtl, pci=0 - en0-3, devnp-e1000, pci=0 - wm0) results_6 - 4 RTL interfaces and 2 Intel interfaces started (devn-rtl, pci=0 - en0-3, devnp-e1000, pci=0/1 - wm0-1) It looks to me like an issue with shared interrupts. Unfortunately the BIOS of the PCs we use does not allow changing the PCI IRQ assignment. With best regards, Michael. Tue, 13 Feb 2018 14:18:15 GMT http://community.qnx.com/sf/go/post118568 Michael Kurt 2018-02-13T14:18:15Z post118562: Re: Issues with devnp-rtl8169.so SP1 http://community.qnx.com/sf/go/post118562 Does this problem only occur when you start the devn-rtl driver? Does it occur if you only start the devn-rtl driver on one interface? (io-pkt-v4-hc -drtl pci=0) Please will you supply the output from the following commands when all drivers are started and io-pkt is in the running state: pidin -pio-pkt-v4-hc mem pci -vv pidin pidin arg pidin irq Thanks, Hugh. On 2018-02-12, 2:41 AM, "Michael Kurt" <community-noreply@qnx.com> wrote: If I may join this discussion, since I also wanted to start such a thread: We are facing the same issue under QNX 6.5.0 SP1, but for 'devn-rtl' in the context of io-pkt-v4-hc (and maybe for io-pkt-v4 - tested only once). As hardware, we are using an Advantech UNO-4683, which is equipped with two Gigabit interfaces (Intel 82574L - devnp-e1000) and four 100MBit interfaces (RTL8139 - devn-rtl).
The problem exists even if no Ethernet cables are plugged in at all and only the drivers are started (with one Gigabit interface UP, one 100MBit interface UP, and the rest DOWN via ifconfig). One thread of io-pkt-v4-hc (directly after starting, I would say) remains in the RUNNING state and consumes a lot of CPU time (after some hours (>8h), 'top' reports 25% CPU usage, and running 'pidin ttimes' in a loop reports one more second of 'sutime' usage per second). When trying to 'ifconfig destroy' the interfaces before reboot, 'ifconfig' hangs (I think) on the interface whose io-pkt thread is RUNNING and consuming so much time. The same hang occurs when only 'slay'ing io-pkt-v4-hc without destroying the interfaces. If there's a way for me to contribute somehow, I would highly appreciate it, since this problem will get us into big trouble when remote-maintaining our systems, as no safe reboot will be possible if one becomes necessary. With best regards, Michael Kurt. Mon, 12 Feb 2018 13:33:13 GMT http://community.qnx.com/sf/go/post118562 Hugh Brown 2018-02-12T13:33:13Z post118558: Re: Issues with devnp-rtl8169.so SP1 http://community.qnx.com/sf/go/post118558 If I may join this discussion, since I also wanted to start such a thread: We are facing the same issue under QNX 6.5.0 SP1, but for 'devn-rtl' in the context of io-pkt-v4-hc (and maybe for io-pkt-v4 - tested only once). As hardware, we are using an Advantech UNO-4683, which is equipped with two Gigabit interfaces (Intel 82574L - devnp-e1000) and four 100MBit interfaces (RTL8139 - devn-rtl). The problem exists even if no Ethernet cables are plugged in at all and only the drivers are started (with one Gigabit interface UP, one 100MBit interface UP, and the rest DOWN via ifconfig).
One thread of io-pkt-v4-hc (directly after starting, I would say) remains in the RUNNING state and consumes a lot of CPU time (after some hours (>8h), 'top' reports 25% CPU usage, and running 'pidin ttimes' in a loop reports one more second of 'sutime' usage per second). When trying to 'ifconfig destroy' the interfaces before reboot, 'ifconfig' hangs (I think) on the interface whose io-pkt thread is RUNNING and consuming so much time. The same hang occurs when only 'slay'ing io-pkt-v4-hc without destroying the interfaces. If there's a way for me to contribute somehow, I would highly appreciate it, since this problem will get us into big trouble when remote-maintaining our systems, as no safe reboot will be possible if one becomes necessary. With best regards, Michael Kurt. Mon, 12 Feb 2018 08:02:19 GMT http://community.qnx.com/sf/go/post118558 Michael Kurt 2018-02-12T08:02:19Z post118549: Re: Issues with devnp-rtl8169.so SP1 http://community.qnx.com/sf/go/post118549 Please will you try the attached 6.5.0 driver? Thanks, Hugh. On 2018-02-01, 2:52 PM, "Mario Charest" <community-noreply@qnx.com> wrote: Ok thanks. I'll dig up 6.6 and see if that works better. Fri, 09 Feb 2018 18:43:06 GMT http://community.qnx.com/sf/go/post118549 Hugh Brown 2018-02-09T18:43:06Z post118526: Re: Porting NetBSD Network Driver http://community.qnx.com/sf/go/post118526 Thanks for the reply and clarification. I think this significantly complicates my situation. I will need to rethink how I should approach my problem. Wed, 07 Feb 2018 20:12:23 GMT http://community.qnx.com/sf/go/post118526 Danny Yen 2018-02-07T20:12:23Z post118525: Re: Porting NetBSD Network Driver http://community.qnx.com/sf/go/post118525 Yes, 6.6.0 and later.
In 6.5.0 SP1 it is under /usr/include/drvr. 6.4.1 is so old that much of what I described is not present. Wed, 07 Feb 2018 19:58:51 GMT http://community.qnx.com/sf/go/post118525 Nick Reilly 2018-02-07T19:58:51Z post118524: Re: Porting NetBSD Network Driver http://community.qnx.com/sf/go/post118524 Thank you for the replies. Is /usr/include/netdrvr/ something that was introduced in a later version of QNX? I'm looking at our version, which is stuck at 6.4.1, and I cannot find /usr/include/netdrvr. Wed, 07 Feb 2018 19:55:39 GMT http://community.qnx.com/sf/go/post118524 Danny Yen 2018-02-07T19:55:39Z post118523: Re: Porting NetBSD Network Driver http://community.qnx.com/sf/go/post118523 DELAY is defined in usr/include/io-pkt/machine/param.h #define DELAY(_x) (((_x) >= 1000) ? (delay)(((_x) / 1000) + 1) : nanospin_ns((_x) * 1000L)) There are several things to consider with delays of any sort in a driver: 1) QNX kernel timer tick resolution. By default the timer tick is 1ms, so any form of delay/sleep call will have that resolution. nanospin_ns() can be used for shorter delays but suffers from calibration issues. 2) io-pkt is, in general behaviour, single-threaded - there is a single "network stack context" that performs most operations. Performing a delay in one driver in the stack context will usually block traffic on other interfaces. 3) To overcome point 2 we have introduced nic_delay() in usr/include/netdrvr/nicsupport.h. However, note that in yielding the stack context to permit traffic to flow on other interfaces, it also permits other operations to happen on your driver - including such things as removal, e.g. on a USB interface. Care needs to be taken with locking. 4) Adding to the complexity, the stack context can migrate between POSIX threads, so the locks need to be nic_mutex and not pthread_mutex - see usr/include/netdrvr/nic_mutex.h. 5) io-pkt internal timers as used by nic_delay() tick at between 8ms and 50ms depending on traffic.
If you have many calls to nic_delay() for short delays, you can find this taking significant clock time. Moving the entire block of code across to a blockop solves this issue and makes nic_delay() behave like a normal delay. Think very carefully about delays in drivers; they have historically been the cause of many issues! Wed, 07 Feb 2018 19:05:21 GMT http://community.qnx.com/sf/go/post118523 Nick Reilly 2018-02-07T19:05:21Z post118522: Re: Porting NetBSD Network Driver http://community.qnx.com/sf/go/post118522 The definition for DELAY() can be found in usr/include/io-pkt/machine/param.h and it cannot be used in an interrupt service routine. nanospin_ns() can be used in an interrupt service routine, but it isn't recommended. See the docs. On 2018-02-07, 12:49 PM, "Danny Yen(deleted)" <community-noreply@qnx.com> wrote: In the documentation, under "Differences between ported NetBSD drivers and native drivers", it says that there are two different "delay" functions: delay() and DELAY(), where DELAY() is reentrant. However, Neutrino's version of delay() is very different (milliseconds instead of microseconds). That's why QNX had defined a DELAY() to support microseconds. Furthermore, ported drivers should define delay() to DELAY(). My question is twofold: 1. Where can I actually find DELAY()? I see delay() in unistd.h but I can't seem to find DELAY() anywhere. Momentics does not seem to recognize it. 2. Is Neutrino's implementation of DELAY() or delay() reentrant, i.e. safe to be used in an interrupt or process context?
Wed, 07 Feb 2018 19:00:20 GMT http://community.qnx.com/sf/go/post118522 Hugh Brown 2018-02-07T19:00:20Z post118521: Porting NetBSD Network Driver http://community.qnx.com/sf/go/post118521 In the documentation, under "Differences between ported NetBSD drivers and native drivers", it says that there are two different "delay" functions: delay() and DELAY(), where DELAY() is reentrant. However, Neutrino's version of delay() is very different (milliseconds instead of microseconds). That's why QNX had defined a DELAY() to support microseconds. Furthermore, ported drivers should define delay() to DELAY(). My question is twofold: 1. Where can I actually find DELAY()? I see delay() in unistd.h but I can't seem to find DELAY() anywhere. Momentics does not seem to recognize it. 2. Is Neutrino's implementation of DELAY() or delay() reentrant, i.e. safe to be used in an interrupt or process context? Wed, 07 Feb 2018 18:10:03 GMT http://community.qnx.com/sf/go/post118521 Danny Yen 2018-02-07T18:10:03Z post118514: Re: Issues with devnp-rtl8169.so SP1 http://community.qnx.com/sf/go/post118514 Ok thanks. I'll dig up 6.6 and see if that works better. Thu, 01 Feb 2018 20:12:53 GMT http://community.qnx.com/sf/go/post118514 Mario Charest 2018-02-01T20:12:53Z post118513: Re: Issues with devnp-rtl8169.so SP1 http://community.qnx.com/sf/go/post118513 Mario, The 6.5.0 rtl8169 driver hasn't been updated for ages, so I'm going to have to back-port the 6.6.0 driver to 6.5.0 when I have a chance, so I won't have a driver for you soon. Hugh.
On 2018-02-01, 11:51 AM, "Mario Charest" <community-noreply@qnx.com> wrote: Output of sloginfo with verbose=3 Feb 01 16:48:40 6 17 0 free_mem_start = 0x7a000000, free_mem_end = 0xfec00000 Feb 01 16:48:40 6 17 0 intrinfo size = 320 entry size = 64 count = 5 Feb 01 16:48:40 6 17 0 base = 0x80010000 num = 6 cascade = 0x7fffffff intr 48 Feb 01 16:48:40 6 17 0 base = 0x8001ffff num = 1 cascade = 0x7fffffff intr 47 Feb 01 16:48:40 6 17 0 base = 0x80000000 num = 3 cascade = 0x7fffffff intr 2 Feb 01 16:48:40 6 17 0 base = 0x00000000 num = 87 cascade = 0x7fffffff intr 54 Feb 01 16:48:40 6 17 0 base = 0x00000100 num = 114 cascade = 0x7fffffff intr 141 Feb 01 16:48:40 5 17 0 Get routing failed 81 Feb 01 16:48:40 6 17 0 find_host_bridge - bridge_count 10 - num_pci_bridges 10 Feb 01 16:48:40 2 17 0 scan_windows: Alloc failed 0x80000008 - Size 0x10000000 Feb 01 16:48:40 5 14 0 tcpip starting Feb 01 16:48:40 3 14 0 Using pseudo random generator. See "random" option Feb 01 16:48:40 5 14 0 devnp-e1000.so did=0x10d3,pci=0,receive=2048,priority=200 Feb 01 16:48:40 5 14 0 wm0 Feb 01 16:48:40 2 12 0 ps2 - Device Timeout (0x55) Feb 01 16:48:40 2 12 0 ps2 - Device Timeout (0x55) Feb 01 16:48:40 2 12 0 ps2 - Device Timeout (0x55) Feb 01 16:48:40 2 12 0 ps2 - Device Timeout (0x55) Feb 01 16:48:40 2 12 0 ps2 - Device Timeout (0x55) Feb 01 16:48:40 2 12 0 ps2 - Device Timeout (0x55) Feb 01 16:48:40 5 14 0 lsm-qnet.so bind=wm0,no_slog=1,periodic_ticks=100,tx_retries=50,max_tx_bufs=1000,sl ow_mode=500 Feb 01 16:48:40 7 15 0 qnet(L4): qnet_birth(): qnet_init() - calling Feb 01 16:48:41 2 19 0 devb-eide 1.00A (Mar 11 2014 13:54:51) ... eide stuff removed Feb 01 16:48:42 3 14 0 Using pseudo random generator. See "random" option Feb 01 16:48:42 5 14 0 devnp-rtl8169.so did=0x8168,pci=0,priority=200,verbose=3 Feb 01 16:48:42 5 14 0 rt0 Feb 01 16:48:42 5 10 0 tcrval 0x2f900d00 - version 0x100000 - mcfg 0xffffffff Feb 01 16:48:43 5 14 0 RealTek 8169 Gigabit Feb 01 16:48:43 5 14 0 LanIdx .............. 
0 Feb 01 16:48:43 5 14 0 DevIdx .............. 0 Feb 01 16:48:43 5 14 0 Vendor .............. 0x10ec Feb 01 16:48:43 5 14 0 Device .............. 0x8168 Feb 01 16:48:43 5 14 0 Revision ............ 0x6 Feb 01 16:48:43 5 14 0 I/O port base ....... 0x3000 Feb 01 16:48:43 5 14 0 Memory base ......... 0x0 Feb 01 16:48:43 5 14 0 Interrupt ........... 0x10b Feb 01 16:48:43 5 14 0 MAC address ......... 0090e8 633a66 Feb 01 16:48:44 5 10 0 rtl_StartUp: RTL_RMS 1519 Feb 01 16:48:45 5 10 0 devnp-rtl8169: rtl_stop() called, disable = 1 Feb 01 16:48:45 5 10 0 rtl_StartUp: RTL_RMS 1519 Feb 01 16:48:47 cron: started Feb 01 16:48:47 1 8 0 phfont: init... Feb 01 16:48:47 1 8 0 phfont: initialized. Feb 01 16:48:47 1 8 0 phfont: '/dev/phfont[<32|64>]' server installed. Feb 01 16:48:49 6 10 0 Link Interrupt Feb 01 16:48:49 6 10 0 Phy Status 93 Feb 01 16:48:49 5 10 0 devnp-rtl8169: Link up (1000 BaseT Full Duplex) _______________________________________________ Networking Drivers http://community.qnx.com/sf/go/post118510 To cancel your subscription to this discussion, please e-mail drivers-networking-unsubscribe@community.qnx.com Thu, 01 Feb 2018 19:06:45 GMT http://community.qnx.com/sf/go/post118513 Hugh Brown 2018-02-01T19:06:45Z post118510: Re: Issues with devnp-rtl8169.so SP1 http://community.qnx.com/sf/go/post118510 Output of sloginfo with verbose=3 Feb 01 16:48:40 6 17 0 free_mem_start = 0x7a000000, free_mem_end = 0xfec00000 Feb 01 16:48:40 6 17 0 intrinfo size = 320 entry size = 64 count = 5 Feb 01 16:48:40 6 17 0 base = 0x80010000 num = 6 cascade = 0x7fffffff intr 48 Feb 01 16:48:40 6 17 0 base = 0x8001ffff num = 1 cascade = 0x7fffffff intr 47 Feb 01 16:48:40 6 17 0 base = 0x80000000 num = 3 cascade = 0x7fffffff intr 2 Feb 01 16:48:40 6 17 0 base = 0x00000000 num = 87 cascade = 0x7fffffff intr 54 Feb 01 16:48:40 6 17 0 base = 0x00000100 num = 114 cascade = 0x7fffffff intr 141 Feb 01 16:48:40 5 17 0 Get routing failed 81 Feb 01 16:48:40 6 17 0 find_host_bridge - bridge_count 10 
- num_pci_bridges 10 Feb 01 16:48:40 2 17 0 scan_windows: Alloc failed 0x80000008 - Size 0x10000000 Feb 01 16:48:40 5 14 0 tcpip starting Feb 01 16:48:40 3 14 0 Using pseudo random generator. See "random" option Feb 01 16:48:40 5 14 0 devnp-e1000.so did=0x10d3,pci=0,receive=2048,priority=200 Feb 01 16:48:40 5 14 0 wm0 Feb 01 16:48:40 2 12 0 ps2 - Device Timeout (0x55) Feb 01 16:48:40 2 12 0 ps2 - Device Timeout (0x55) Feb 01 16:48:40 2 12 0 ps2 - Device Timeout (0x55) Feb 01 16:48:40 2 12 0 ps2 - Device Timeout (0x55) Feb 01 16:48:40 2 12 0 ps2 - Device Timeout (0x55) Feb 01 16:48:40 2 12 0 ps2 - Device Timeout (0x55) Feb 01 16:48:40 5 14 0 lsm-qnet.so bind=wm0,no_slog=1,periodic_ticks=100,tx_retries=50,max_tx_bufs=1000,sl ow_mode=500 Feb 01 16:48:40 7 15 0 qnet(L4): qnet_birth(): qnet_init() - calling Feb 01 16:48:41 2 19 0 devb-eide 1.00A (Mar 11 2014 13:54:51) ... eide stuff removed Feb 01 16:48:42 3 14 0 Using pseudo random generator. See "random" option Feb 01 16:48:42 5 14 0 devnp-rtl8169.so did=0x8168,pci=0,priority=200,verbose=3 Feb 01 16:48:42 5 14 0 rt0 Feb 01 16:48:42 5 10 0 tcrval 0x2f900d00 - version 0x100000 - mcfg 0xffffffff Feb 01 16:48:43 5 14 0 RealTek 8169 Gigabit Feb 01 16:48:43 5 14 0 LanIdx .............. 0 Feb 01 16:48:43 5 14 0 DevIdx .............. 0 Feb 01 16:48:43 5 14 0 Vendor .............. 0x10ec Feb 01 16:48:43 5 14 0 Device .............. 0x8168 Feb 01 16:48:43 5 14 0 Revision ............ 0x6 Feb 01 16:48:43 5 14 0 I/O port base ....... 0x3000 Feb 01 16:48:43 5 14 0 Memory base ......... 0x0 Feb 01 16:48:43 5 14 0 Interrupt ........... 0x10b Feb 01 16:48:43 5 14 0 MAC address ......... 0090e8 633a66 Feb 01 16:48:44 5 10 0 rtl_StartUp: RTL_RMS 1519 Feb 01 16:48:45 5 10 0 devnp-rtl8169: rtl_stop() called, disable = 1 Feb 01 16:48:45 5 10 0 rtl_StartUp: RTL_RMS 1519 Feb 01 16:48:47 cron: started Feb 01 16:48:47 1 8 0 phfont: init... Feb 01 16:48:47 1 8 0 phfont: initialized. 
Feb 01 16:48:47 1 8 0 phfont: '/dev/phfont[<32|64>]' server installed. Feb 01 16:48:49 6 10 0 Link Interrupt Feb 01 16:48:49 6 10 0 Phy Status 93 Feb 01 16:48:49 5 10 0 devnp-rtl8169: Link up (1000 BaseT Full Duplex) Thu, 01 Feb 2018 17:12:05 GMT http://community.qnx.com/sf/go/post118510 Mario Charest 2018-02-01T17:12:05Z post118508: Re: Issues with devnp-rtl8169.so SP1 http://community.qnx.com/sf/go/post118508 The RUNNING issue is gone. However, it seems no transmission is occurring. In the output of nicinfo, notice how Packets Transmitted OK stays at 1, even after many attempts at using ping. rt0: RealTek 8169 Gigabit Ethernet Controller Physical Node ID ........................... 0090E8 633A66 Current Physical Node ID ................... 0090E8 633A66 Current Operation Rate ..................... 1000.00 Mb/s full-duplex Active Interface Type ...................... MII Active PHY address ....................... 0 Maximum Transmittable data Unit ............ 1500 Maximum Receivable data Unit ............... 1500 Hardware Interrupt ......................... 0x10b I/O Aperture ............................... 0x3000 - 0x30ff Memory Aperture ............................ 0x0 Promiscuous Mode ........................... Off Multicast Support .......................... Enabled Packets Transmitted OK ..................... 1 Bytes Transmitted OK ....................... 588 Memory Allocation Failures on Transmit ..... 0 Packets Received OK ........................ 85739 Bytes Received OK .......................... 12243098 Broadcast Packets Received OK .............. 85683 Multicast Packets Received OK .............. 56 Memory Allocation Failures on Receive ...... 0 Single Collisions on Transmit .............. 0 Transmits aborted (excessive collisions) ... 0 Transmit Underruns ......................... 1 No Carrier on Transmit ..................... 0 Receive Alignment errors ................... 0 Received packets with CRC errors ........... 
0 Packets Dropped on receive ................. 0 > Please try the attached driver. > > Thanks, Hugh. > > On 2018-01-31, 2:22 PM, "Mario Charest" <community-noreply@qnx.com> wrote: > > Hi, > > From latest BSP found on foundry: > > NAME=devnp-rtl8169.so > DESCRIPTION=Realtek 8169 ethernet driver > DATE=2016/10/18-16:37:13-CDT > STATE=stable > HOST=psp650-linux-1.bts.rim.net > USER=builder > VERSION=1592 > TAGID=PSP_networking_br650_be650SP1 > > This is my first attempt at using this driver on some new hardware. > > The issue is that the second thread of io-pkt-v4 will become RUNNING > forever. It does not if there is no cable connected (or an IP address is > assigned). The issue seems somewhat random. After a reboot the problem shows > up after a few seconds. But on about 1 reboot in 10 it looks ok. > > When io-pkt-v4 is in this state, any command such as nicinfo or ifconfig > becomes reply-blocked forever. > > Hugh, in case you are reading this. I could go through the official support > channel if you prefer, but given your participation here I was hoping to get > rid of overhead ;-) Thu, 01 Feb 2018 16:49:28 GMT http://community.qnx.com/sf/go/post118508 Mario Charest 2018-02-01T16:49:28Z post118507: Re: Issues with devnp-rtl8169.so SP1 http://community.qnx.com/sf/go/post118507 Please try the attached driver. Thanks, Hugh. On 2018-01-31, 2:22 PM, "Mario Charest" <community-noreply@qnx.com> wrote: Hi, From latest BSP found on foundry: NAME=devnp-rtl8169.so DESCRIPTION=Realtek 8169 ethernet driver DATE=2016/10/18-16:37:13-CDT STATE=stable HOST=psp650-linux-1.bts.rim.net USER=builder VERSION=1592 TAGID=PSP_networking_br650_be650SP1 This is my first attempt at using this driver on some new hardware.
The issue is that the second thread of io-pkt-v4 will become RUNNING forever. It does not if there is no cable connected (or an IP address is assigned). The issue seems somewhat random. After a reboot the problem shows up after a few seconds. But on about 1 reboot in 10 it looks ok. When io-pkt-v4 is in this state, any command such as nicinfo or ifconfig becomes reply-blocked forever. Hugh, in case you are reading this. I could go through the official support channel if you prefer, but given your participation here I was hoping to get rid of overhead ;-) Thu, 01 Feb 2018 13:50:53 GMT http://community.qnx.com/sf/go/post118507 Hugh Brown 2018-02-01T13:50:53Z post118505: Issues with devnp-rtl8169.so SP1 http://community.qnx.com/sf/go/post118505 Hi, From latest BSP found on foundry: NAME=devnp-rtl8169.so DESCRIPTION=Realtek 8169 ethernet driver DATE=2016/10/18-16:37:13-CDT STATE=stable HOST=psp650-linux-1.bts.rim.net USER=builder VERSION=1592 TAGID=PSP_networking_br650_be650SP1 This is my first attempt at using this driver on some new hardware. The issue is that the second thread of io-pkt-v4 will become RUNNING forever. It does not if there is no cable connected (or an IP address is assigned). The issue seems somewhat random. After a reboot the problem shows up after a few seconds. But on about 1 reboot in 10 it looks ok. When io-pkt-v4 is in this state, any command such as nicinfo or ifconfig becomes reply-blocked forever. Hugh, in case you are reading this.
I could go through official support channel if you prefer, but given your participation here I was hoping to get rid of overhead ;-) Wed, 31 Jan 2018 19:43:19 GMT http://community.qnx.com/sf/go/post118505 Mario Charest 2018-01-31T19:43:19Z post118435: Re: QNX 7 E1000 Driver Very Slow Sending Only http://community.qnx.com/sf/go/post118435 Good news! Glad it is working for you. Nicinfo only displays hardware errors, so you need to run netstat to see TCP/UDP problems. Hugh. On 2018-01-22, 6:02 PM, "Tim Sowden(deleted)" <community-noreply@qnx.com> wrote: > Seeing a lot of TCP retransmit timeouts & ICMP errors -- have you got a really > congested link or flapping port between the 2 hosts (possible bad switchport > or cable)? > > Under TCP stats: > tim: 4616 retransmit timeouts > tim1: 4616 retransmit timeouts > tim2: 4702 retransmit timeouts > > Under ICMP stats: > tim: 261737 calls to icmp_error > destination unreachable: 261737 > tim1: 261748 calls to icmp_error > destination unreachable: 261748 > tim2: 261933 calls to icmp_error > destination unreachable: 261933 > > The computers are all located in a lab with no outside access (closed network) so there is very little traffic. I swapped Ethernet cables on the chance it might have been a bad cable I used but that made no difference. I was *this* close to writing back when I decided to switch ports on the Netgear router. I hadn't done that prior because I had plugged into a port that was being used by another computer in our lab so I *knew* it was good. Of course the instant I plugged into another port everything started working in both directions. So the port must have been degraded after all and just no one has noticed because they haven't transferred any large amount of data in the bad direction on the prior machine that was plugged in there! One final question though. I swore prior versions of QNX showed these types of hardware issues in Nicinfo which was the first place I went when I was having problems. 
The fact there were no errors reported there made me think it was a configuration issue vs a hardware one. Thanks for all your help, Tim _______________________________________________ Networking Drivers http://community.qnx.com/sf/go/post118432 To cancel your subscription to this discussion, please e-mail drivers-networking-unsubscribe@community.qnx.com Tue, 23 Jan 2018 13:28:50 GMT http://community.qnx.com/sf/go/post118435 Hugh Brown 2018-01-23T13:28:50Z post118432: Re: RE: QNX 7 E1000 Driver Very Slow Sending Only http://community.qnx.com/sf/go/post118432 > Seeing a lot of TCP retransmit timeouts & ICMP errors -- have you got a really > congested link or flapping port between the 2 hosts (possible bad switchport > or cable)? > > Under TCP stats: > tim: 4616 retransmit timeouts > tim1: 4616 retransmit timeouts > tim2: 4702 retransmit timeouts > > Under ICMP stats: > tim: 261737 calls to icmp_error > destination unreachable: 261737 > tim1: 261748 calls to icmp_error > destination unreachable: 261748 > tim2: 261933 calls to icmp_error > destination unreachable: 261933 > > The computers are all located in a lab with no outside access (closed network) so there is very little traffic. I swapped Ethernet cables on the chance it might have been a bad cable I used but that made no difference. I was *this* close to writing back when I decided to switch ports on the Netgear router. I hadn't done that prior because I had plugged into a port that was being used by another computer in our lab so I *knew* it was good. Of course the instant I plugged into another port everything started working in both directions. So the port must have been degraded after all and just no one has noticed because they haven't transferred any large amount of data in the bad direction on the prior machine that was plugged in there! One final question though. I swore prior versions of QNX showed these types of hardware issues in Nicinfo which was the first place I went when I was having problems. 
The fact there were no errors reported there made me think it was a configuration issue vs a hardware one. Thanks for all your help, Tim Mon, 22 Jan 2018 23:23:17 GMT http://community.qnx.com/sf/go/post118432 Tim Sowden(deleted) 2018-01-22T23:23:17Z post118431: RE: QNX 7 E1000 Driver Very Slow Sending Only http://community.qnx.com/sf/go/post118431 Seeing a lot of TCP retransmit timeouts & ICMP errors -- have you got a really congested link or flapping port between the 2 hosts (possible bad switchport or cable)? Under TCP stats: tim: 4616 retransmit timeouts tim1: 4616 retransmit timeouts tim2: 4702 retransmit timeouts Under ICMP stats: tim: 261737 calls to icmp_error destination unreachable: 261737 tim1: 261748 calls to icmp_error destination unreachable: 261748 tim2: 261933 calls to icmp_error destination unreachable: 261933 Mon, 22 Jan 2018 22:20:50 GMT http://community.qnx.com/sf/go/post118431 Dean Denter 2018-01-22T22:20:50Z post118430: Re: QNX 7 E1000 Driver Very Slow Sending Only http://community.qnx.com/sf/go/post118430 Hugh, We are using the 32 bit version of QNX 7. As an FYI, our recompiled (from 6.3) S/W on QNX 7 is crashing all over the place with suspicious memory related errors and we are allocating some large chunks (8 megs). Not sure if that means anything. Even the transfer of the results of the netstat commands is wildly varying. C:\Users\admin\transfer>ftp 172.27.12.3 Connected to 172.27.12.3. 220 172.27.12.3 FTP server (QNXNTO-ftpd 20081216) ready. 502 Unknown command 'UTF8'. User (172.27.12.3:(none)): qnxuser 331 User qnxuser accepted, provide password for qnxuser@SurfaceController. Password: 230-No directory! Logging in with home=/ 230 User qnxuser logged in. ftp> cd /fs/ram 250 CWD command successful. ftp> get tim 200 PORT command successful. 150 Opening ASCII mode data connection for 'tim' (12611 bytes). 226 Transfer complete. ftp: 13020 bytes received in 1.02Seconds 12.81Kbytes/sec. ftp> get tim1 200 PORT command successful. 
150 Opening ASCII mode data connection for 'tim1' (12611 bytes). 226 Transfer complete. ftp: 13020 bytes received in 0.00Seconds 13020000.00Kbytes/sec. ftp> get tim2 200 PORT command successful. 150 Opening ASCII mode data connection for 'tim2' (12611 bytes). 226 Transfer complete. ftp: 13020 bytes received in 0.00Seconds 13020000.00Kbytes/sec. ftp> quit 221- Data traffic for this session was 39060 bytes in 3 files. Total traffic for this session was 39919 bytes in 3 transfers. 221 Thank you for using the FTP service on 172.27.12.3. File tim = state of system before doing anything (in case you needed base numbers) File tim1 = state of system after transferring file to QNX (the good direction) File tim2 = state of system after transferring file from QNX (the bad direction) Tim Mon, 22 Jan 2018 20:48:17 GMT http://community.qnx.com/sf/go/post118430 Tim Sowden(deleted) 2018-01-22T20:48:17Z post118429: Re: QNX 7 E1000 Driver Very Slow Sending Only http://community.qnx.com/sf/go/post118429 Are you running 32-bit or 64-bit under QNX7? On 2018-01-22, 2:32 PM, "Tim Sowden(deleted)" <community-noreply@qnx.com> wrote: Hugh, > When you run the "get file" on the QNX7 machine, where is the file being > written to? The fact that the "put" works fine, tells me that the driver is > working fine. Maybe this could be due to a file system slowdown, so could you > try copying the large file to another large file on the same system and see if > it takes a while. The ftp 'get' command is being run on a Windows machine (ie getting the file from the QNX machine). That's why I showed the same transfer on a QNX 6 machine to indicate the Windows machine isn't the problem. It's transferring data *from* the QNX 7 machine that's slow. I've tried sending from a RAM drive in addition to the hard drive and it makes no difference so it's definitely not file I/O related. I don't think it's driver related either since the transfer is fine in the other direction. 
I'm certain there must be some network configuration option that's not right. I can install Wireshark and do a capture but I am not sure if that's useful or not. Tim _______________________________________________ Networking Drivers http://community.qnx.com/sf/go/post118427 To cancel your subscription to this discussion, please e-mail drivers-networking-unsubscribe@community.qnx.com Mon, 22 Jan 2018 20:08:20 GMT http://community.qnx.com/sf/go/post118429 Hugh Brown 2018-01-22T20:08:20Z post118428: Re: QNX 7 E1000 Driver Very Slow Sending Only http://community.qnx.com/sf/go/post118428 Tim, After running the ftp in both directions on the QNX7 machine, can you please collect the output from "netstat -s" and send it to me? Thanks, Hugh. On 2018-01-22, 2:32 PM, "Tim Sowden(deleted)" <community-noreply@qnx.com> wrote: Hugh, > When you run the "get file" on the QNX7 machine, where is the file being > written to? The fact that the "put" works fine, tells me that the driver is > working fine. Maybe this could be due to a file system slowdown, so could you > try copying the large file to another large file on the same system and see if > it takes a while. The ftp 'get' command is being run on a Windows machine (ie getting the file from the QNX machine). That's why I showed the same transfer on a QNX 6 machine to indicate the Windows machine isn't the problem. It's transferring data *from* the QNX 7 machine that's slow. I've tried sending from a RAM drive in addition to the hard drive and it makes no difference so it's definitely not file I/O related. I don't think it's driver related either since the transfer is fine in the other direction. I'm certain there must be some network configuration option that's not right. I can install Wireshark and do a capture but I am not sure if that's useful or not. 
Tim Mon, 22 Jan 2018 20:03:09 GMT http://community.qnx.com/sf/go/post118428 Hugh Brown 2018-01-22T20:03:09Z post118427: Re: QNX 7 E1000 Driver Very Slow Sending Only http://community.qnx.com/sf/go/post118427 Hugh, > When you run the "get file" on the QNX7 machine, where is the file being > written to? The fact that the "put" works fine, tells me that the driver is > working fine. Maybe this could be due to a file system slowdown, so could you > try copying the large file to another large file on the same system and see if > it takes a while. The ftp 'get' command is being run on a Windows machine (ie getting the file from the QNX machine). That's why I showed the same transfer on a QNX 6 machine to indicate the Windows machine isn't the problem. It's transferring data *from* the QNX 7 machine that's slow. I've tried sending from a RAM drive in addition to the hard drive and it makes no difference so it's definitely not file I/O related. I don't think it's driver related either since the transfer is fine in the other direction. I'm certain there must be some network configuration option that's not right. I can install Wireshark and do a capture but I am not sure if that's useful or not. Tim Mon, 22 Jan 2018 19:53:36 GMT http://community.qnx.com/sf/go/post118427 Tim Sowden(deleted) 2018-01-22T19:53:36Z post118426: Re: QNX 7 E1000 Driver Very Slow Sending Only http://community.qnx.com/sf/go/post118426 When you run the "get file" on the QNX7 machine, where is the file being written to? The fact that the "put" works fine, tells me that the driver is working fine. Maybe this could be due to a file system slowdown, so could you try copying the large file to another large file on the same system and see if it takes a while. Thanks, Hugh.
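The netstat -s counters traded back and forth in this thread (the retransmit-timeout and icmp_error numbers in post118431 above) are easiest to read as deltas between a capture taken before the transfer test and one taken after. Below is a minimal, illustrative sketch only, assuming the usual counter-per-line "N description" format of netstat -s output; the sample numbers are the ones quoted in the thread.

```python
# Illustrative only: compare two "netstat -s" captures so that counter
# movement (e.g. retransmit timeouts) stands out. Sample counters below
# mirror those quoted in post118431.
import re

def counters(text):
    """Map each 'N description' line of netstat -s output to {description: N}."""
    out = {}
    for line in text.splitlines():
        m = re.match(r"\s*(\d+)\s+(.*\S)", line)
        if m:
            out[m.group(2)] = int(m.group(1))
    return out

def deltas(before, after):
    """Return only the counters that changed between the two captures."""
    b, a = counters(before), counters(after)
    return {k: a[k] - b[k] for k in a if k in b and a[k] != b[k]}

before = "4616 retransmit timeouts\n261737 calls to icmp_error\n"
after = "4702 retransmit timeouts\n261933 calls to icmp_error\n"
print(deltas(before, after))
# -> {'retransmit timeouts': 86, 'calls to icmp_error': 196}
```

The same comparison works for any pair of captures; counters that did not move are simply not reported.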
Mon, 22 Jan 2018 19:32:06 GMT http://community.qnx.com/sf/go/post118426 Hugh Brown 2018-01-22T19:32:06Z post118421: Re: QNX 7 E1000 Driver Very Slow Sending Only http://community.qnx.com/sf/go/post118421 Hi Hugh, Below is the output you requested. Unfortunately I can't run QNX 6 on this hardware because the version we use is 6.32 and it's too old to handle the SATA + network etc. That's one of the reasons we are upgrading to QNX 7: our old hardware is becoming obsolete and we can't get it anymore. So the QNX 6 test you saw was the same network but different computer hardware. One thing that is different is that in 6.32 we have a net.cfg file that contains all the IP settings, including gateways / DNS servers etc. That file doesn't seem to exist in QNX 7, so as I showed in my prior post I just use ifconfig to set the IP and nothing else. But maybe it's gateway/DNS related since I don't set any of that specifically...
B000:D00:F00 @ idx 0 vid/did: 8086/191f Intel Corporation, <device id - unknown> class/subclass/reg: 06/00/00 Host-to-PCI Bridge Device revid: 7 cmd/status registers: 6/2090 Capabilities: 09 (VEND) --> * Address Space list - 0 assigned Interrupt list - 0 assigned hdrType: 0 ssvid: 8086 Intel Corporation ssid: 2015 B000:D02:F00 @ idx 1 vid/did: 8086/1912 Intel Corporation, <device id - unknown> class/subclass/reg: 03/00/00 PC Compatible VGA Display Controller revid: 6 cmd/status registers: 7/10 Capabilities: 09 (VEND) --> 10 (PCIe) --> 05 (MSI) --> 01 (PMI) --> * Address Space list - 3 assigned [0] MEM, addr=de000000, size=1000000, align: 1000000, attr: 64bit ENABLED [2] MEM, addr=c0000000, size=10000000, align: 10000000, attr: 64bit PREFETCH ENABLED [4] I/O, addr=f000, size=40, align: 40, attr: 16bit ENABLED Interrupt list - 0 assigned hdrType: 0 ssvid: 8086 Intel Corporation ssid: 2212 B000:D20:F00 @ idx 2 vid/did: 8086/a12f Intel Corporation, <device id - unknown> class/subclass/reg: 0c/03/30 USB Serial Bus Controller (Intel eXtensible HCI) revid: 49 cmd/status registers: 6/290 Capabilities: 01 (PMI) --> 05 (MSI) --> * Address Space list - 1 assigned [0] MEM, addr=df130000, size=10000, align: 10000, attr: 64bit ENABLED Interrupt list - 1 assigned (MSI) Interrupt 0 on IRQ 257 hdrType: 0 ssvid: 8086 Intel Corporation ssid: 7270 B000:D20:F02 @ idx 3 vid/did: 8086/a131 Intel Corporation, <device id - unknown> class/subclass/reg: 11/80/00 Other DA/DSP Controller revid: 49 cmd/status registers: 6/10 Capabilities: 01 (PMI) --> 05 (MSI) --> * Address Space list - 1 assigned [0] MEM, addr=df14e000, size=1000, align: 1000, attr: 64bit ENABLED Interrupt list - 0 assigned hdrType: 0 ssvid: 8086 Intel Corporation ssid: 7270 B000:D22:F00 @ idx 4 vid/did: 8086/a13a Intel Corporation, <device id - unknown> class/subclass/reg: 07/80/00 Other Simple Communications Controller revid: 49 cmd/status registers: 2/10 Capabilities: 01 (PMI) --> 05 (MSI) --> * Address Space list - 1 
assigned [0] MEM, addr=df14d000, size=1000, align: 1000, attr: 64bit ENABLED Interrupt list - 0 assigned hdrType: 0 ssvid: 8086 Intel Corporation ssid: 1999 B000:D23:F00 @ idx 5 vid/did: 8086/a102 Intel Corporation, <device id - unknown> class/subclass/reg: 01/06/01 SATA Mass Storage Controller (AHCI Interface) revid: 49 cmd/status registers: 7/2b0 Capabilities: 05 (MSI) --> 01 (PMI) --> 12 (SATA CFG) --> * Address Space list - 6 assigned [0] MEM, addr=df148000, size=2000, align: 2000, attr: 32bit ENABLED [1] MEM, addr=df14c000, size=100, align: 100, attr: 32bit ENABLED [2] I/O, addr=f090, size=8, align: 8, attr: 16bit ENABLED [3] I/O, addr=f080, size=4, align: 4, attr: 16bit ENABLED [4] I/O, addr=f060, size=20, align: 20, attr: 16bit ENABLED [5] MEM, addr=df14b000, size=800, align: 800, attr: 32bit ENABLED Interrupt list - 1 assigned (MSI) Interrupt 0 on IRQ 256 hdrType: 0 ssvid: 8086 Intel Corporation ssid: 7270 B000:D29:F00 @ idx 6 vid/did: 8086/a118 Intel Corporation, <device id - unknown> class/subclass/reg: 06/04/00 PCI-to-PCI Bridge Device revid: 241 cmd/status registers: 7/10 Capabilities: 10 (PCIe) --> 05 (MSI) --> 0d (BR SSVID) --> 01 (PMI) --> * Address Space list - 0 assigned Interrupt list - 0 assigned hdrType: 1 primary/secondary/subordinate bus numbers: 0/1/2 bridge control: 0x10 secondary status: 0x2000 mem base/limit: fff00000/fffff pfmem base/limit: fff00000/fffff I/O base/limit: f000/fff B000:D29:F01 @ idx 7 vid/did: 8086/a119 Intel Corporation, <device id - unknown> class/subclass/reg: 06/04/00 PCI-to-PCI Bridge Device revid: 241 cmd/status registers: 7/10 Capabilities: 10 (PCIe) --> 05 (MSI) --> 0d (BR SSVID) --> 01 (PMI) --> * Address Space list - 0 assigned Interrupt list - 0 assigned hdrType: 1 primary/secondary/subordinate bus numbers: 0/3/3 bridge control: 0x10 secondary status: 0x2000 mem base/limit: df000000/df0fffff pfmem base/limit: fff00000/fffff I/O base/limit: e000/efff B000:D31:F00 @ idx 8 vid/did: 8086/a143 Intel Corporation, 
<device id - unknown> class/subclass/reg: 06/01/00 PCI-to-ISA Bridge Device revid: 49 cmd/status registers: 7/200 Capabilities: * Address Space list - 0 assigned Interrupt list - 0 assigned hdrType: 0 ssvid: 8086 Intel Corporation ssid: 7270 B000:D31:F01 @ idx 9 vid/did: 8086/a120 Intel Corporation, <device id - unknown> class/subclass/reg: 05/80/00 Other Memory Controller revid: 49 cmd/status registers: 6/0 Capabilities: * Address Space list - 1 assigned [0] MEM, addr=fd000000, size=1000000, align: 1000000, attr: 64bit ENABLED Interrupt list - 0 assigned hdrType: 0 ssvid: 8086 Intel Corporation ssid: 7270 B000:D31:F02 @ idx 10 vid/did: 8086/a121 Intel Corporation, <device id - unknown> class/subclass/reg: 05/80/00 Other Memory Controller revid: 49 cmd/status registers: 6/0 Capabilities: * Address Space list - 1 assigned [0] MEM, addr=df144000, size=4000, align: 4000, attr: 32bit ENABLED Interrupt list - 0 assigned hdrType: 0 ssvid: 8086 Intel Corporation ssid: 7270 B000:D31:F03 @ idx 11 vid/did: 8086/a170 Intel Corporation, <device id - unknown> class/subclass/reg: 04/03/00 Mixed Mode Multi-media Device revid: 49 cmd/status registers: 6/10 Capabilities: 01 (PMI) --> 05 (MSI) --> * Address Space list - 2 assigned [0] MEM, addr=df140000, size=4000, align: 4000, attr: 64bit ENABLED [4] MEM, addr=df120000, size=10000, align: 10000, attr: 64bit ENABLED Interrupt list - 0 assigned hdrType: 0 ssvid: 8086 Intel Corporation ssid: 7270 B000:D31:F04 @ idx 12 vid/did: 8086/a123 Intel Corporation, <device id - unknown> class/subclass/reg: 0c/05/00 SMBus Serial Bus Controller revid: 49 cmd/status registers: 3/280 Capabilities: * Address Space list - 2 assigned [0] MEM, addr=df14a000, size=100, align: 100, attr: 64bit ENABLED [4] I/O, addr=f040, size=20, align: 20, attr: 16bit ENABLED Interrupt list - 0 assigned hdrType: 0 ssvid: 8086 Intel Corporation ssid: 7270 B000:D31:F06 @ idx 13 vid/did: 8086/15b8 Intel Corporation, <device id - unknown> class/subclass/reg: 02/00/00 
Ethernet Network Controller revid: 49 cmd/status registers: 6/10 Capabilities: 01 (PMI) --> 05 (MSI) --> 13 (AF) --> * Address Space list - 1 assigned [0] MEM, addr=df100000, size=20000, align: 20000, attr: 32bit ENABLED Interrupt list - 1 assigned (MSI) Interrupt 0 on IRQ 258 hdrType: 0 ssvid: 8086 Intel Corporation ssid: 0000 B001:D00:F00 @ idx 14 in slot 12 of chassis 0 vid/did: 1283/8892 Waldo, <device id - unknown> class/subclass/reg: 06/04/01 PCI-to-PCI Bridge Device (+ Subtractive Decode) revid: 113 cmd/status registers: 7/10 Capabilities: 01 (PMI) --> 0d (BR SSVID) --> * Address Space list - 0 assigned Interrupt list - 0 assigned hdrType: 1 primary/secondary/subordinate bus numbers: 1/2/2 bridge control: 0x210 secondary status: 0x2220 mem base/limit: fff00000/fffff pfmem base/limit: fff00000/fffff I/O base/limit: fff000/fff B003:D00:F00 @ idx 15 in slot 13 of chassis 0 vid/did: 8086/1539 Intel Corporation, <device id - unknown> class/subclass/reg: 02/00/00 Ethernet Network Controller revid: 3 cmd/status registers: 7/10 Capabilities: 01 (PMI) --> 05 (MSI) --> 11 (MSI-X) --> 10 (PCIe) --> * Address Space list - 3 assigned [0] MEM, addr=df000000, size=20000, align: 20000, attr: 32bit ENABLED [2] I/O, addr=e000, size=20, align: 20, attr: 32bit ENABLED [3] MEM, addr=df023000, size=1000, align: 4000, attr: 32bit ENABLED Interrupt list - 3 assigned (MSI-X) Interrupt 0 on IRQ 259 Interrupt 1 on IRQ 260 Interrupt 2 on IRQ 261 hdrType: 0 ssvid: 8086 Intel Corporation ssid: 0000 TIA, Tim Mon, 22 Jan 2018 15:34:00 GMT http://community.qnx.com/sf/go/post118421 Tim Sowden(deleted) 2018-01-22T15:34:00Z post118419: Re: QNX 7 E1000 Driver Very Slow Sending Only http://community.qnx.com/sf/go/post118419 The QNX6 and QNX7 e1000 drivers should be identical, so I don't know why you are having the slowdown on the QNX7 machine. Can you run QNX6 on that machine and see if the driver behaves the same? Also, please post the output from "pci-tool -vv" from the QNX7 machine. 
Thanks, Hugh. Mon, 22 Jan 2018 13:10:39 GMT http://community.qnx.com/sf/go/post118419 Hugh Brown 2018-01-22T13:10:39Z post118417: QNX 7 E1000 Driver Very Slow Sending Only http://community.qnx.com/sf/go/post118417 I have a QNX 7 machine with dual Intel NIC cards in it, only one of which is plugged in. Transferring files from this machine is incredibly slow, but sending files to it is lightning fast. I have another, older QNX 6 machine (different NIC hardware) on the same network, and transferring files in both directions is fine. QNX 6 Machine: C:\Users\admin\transfer>ftp 172.27.12.103 Connected to 172.27.12.103. 220 172.27.12.103 FTP server ready. 502 Unknown command UTF8.
User (172.27.12.103:(none)): root 331 Password required for root. Password: 230- Welcome to QNX Neutrino! 230 User root logged in. ftp> bin 200 Type set to I. ftp> put deviceMgr.core 200 PORT command successful. 150 Opening BINARY mode data connection for 'deviceMgr.core'. 226 Transfer complete. ftp: 11628544 bytes sent in 0.38Seconds 31009.45Kbytes/sec. ftp> get deviceMgr.core 200 PORT command successful. 150 Opening BINARY mode data connection for 'deviceMgr.core' (11628544 bytes). 226 Transfer complete. ftp: 11628544 bytes received in 0.66Seconds 17726.44Kbytes/sec. ftp> QNX 7 machine C:\Users\admin\transfer>ftp 172.27.12.3 Connected to 172.27.12.3. 220 172.27.12.3 FTP server (QNXNTO-ftpd 20081216) ready. 502 Unknown command 'UTF8'. User (172.27.12.3:(none)): qnxuser 331 User qnxuser accepted, provide password for qnxuser@SurfaceController. Password: 230-No directory! Logging in with home=/ 230 User qnxuser logged in. ftp> cd /tmp 250 CWD command successful. ftp> bin 200 Type set to I. ftp> put deviceMgr.core 200 PORT command successful. 150 Opening BINARY mode data connection for 'deviceMgr.core'. 226 Transfer complete. ftp: 11628544 bytes sent in 0.38Seconds 31009.45Kbytes/sec. ftp> get deviceMgr.core 200 PORT command successful. 150 Opening BINARY mode data connection for 'deviceMgr.core' (11628544 bytes). 226 Transfer complete. ftp: 11628544 bytes received in 73.31Seconds 158.62Kbytes/sec. ftp> Note that incredibly slow transfer rate! 
I start networking to use just 1 NIC with a static IP io-pkt-v6-hc -d e1000 & if_up -r 10 -p wm1 ifconfig wm0 up ifconfig wm1 172.27.12.3 up inetd & Ifconfig reports: # ifconfig -a lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 33192 inet 127.0.0.1 netmask 0xff000000 inet6 ::1 prefixlen 128 inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1 wm0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500 capabilities rx=1f<IP4CSUM,TCP4CSUM,UDP4CSUM,TCP6CSUM,UDP6CSUM> capabilities tx=7f<IP4CSUM,TCP4CSUM,UDP4CSUM,TCP6CSUM,UDP6CSUM,TSO4,TSO6> enabled=0 address: 00:0b:ab:d6:bd:15 media: Ethernet none inet6 fe80::20b:abff:fed6:bd15%wm0 prefixlen 64 scopeid 0x11 wm1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500 capabilities rx=1f<IP4CSUM,TCP4CSUM,UDP4CSUM,TCP6CSUM,UDP6CSUM> capabilities tx=7f<IP4CSUM,TCP4CSUM,UDP4CSUM,TCP6CSUM,UDP6CSUM,TSO4,TSO6> enabled=0 address: 00:0b:ab:d6:bd:16 media: Ethernet autoselect (1000baseT full-duplex,flowcontrol,rxpause,txpause) status: active inet 172.27.12.3 netmask 0xffff0000 broadcast 172.27.255.255 inet6 fe80::20b:abff:fed6:bd16%wm1 prefixlen 64 scopeid 0x12 Nicinfo looks fine (gigabit, full duplex, no physical medium errors) # nicinfo wm0: INTEL PRO/1000 Gigabit (Copper) Ethernet Controller Link is DOWN Physical Node ID ........................... 000BAB D6BD15 Current Physical Node ID ................... 000BAB D6BD15 Current Operation Rate ..................... Unknown Active Interface Type ...................... MII Active PHY address ....................... 2 Maximum Transmittable data Unit ............ 1500 Maximum Receivable data Unit ............... 1500 Hardware Interrupt ......................... 0x102 Memory Aperture ............................ 0xdf100000 - 0xdf11ffff Promiscuous Mode ........................... Off Multicast Support .......................... Enabled Packets Transmitted OK ..................... 0 Bytes Transmitted OK ....................... 0 Broadcast Packets Transmitted OK ........... 
  0
  Multicast Packets Transmitted OK ........... 0
  Memory Allocation Failures on Transmit ..... 0
  Packets Received OK ........................ 0
  Bytes Received OK .......................... 0
  Broadcast Packets Received OK .............. 0
  Multicast Packets Received OK .............. 0
  Memory Allocation Failures on Receive ...... 0
  Single Collisions on Transmit .............. 0
  Multiple Collisions on Transmit ............ 0
  Deferred Transmits ......................... 0
  Late Collision on Transmit errors .......... 0
  Transmits aborted (excessive collisions) ... 0
  Jabber detected ............................ 0
  Receive Alignment errors ................... 0
  Received packets with CRC errors ........... 0
  Packets Dropped on receive ................. 0
  Oversized Packets received ................. 0
  Short packets .............................. 0
  Squelch Test errors ........................ 0
  Invalid Symbol Errors ...................... 0

wm1:
  INTEL PRO/1000 Gigabit (Copper) Ethernet Controller
  Physical Node ID ........................... 000BAB D6BD16
  Current Physical Node ID ................... 000BAB D6BD16
  Current Operation Rate ..................... 1000.00 Mb/s full-duplex
  Active Interface Type ...................... MII
  Active PHY address ......................... 1
  Maximum Transmittable data Unit ............ 1500
  Maximum Receivable data Unit ............... 1500
  Hardware Interrupt ......................... 0x103
  Hardware Interrupt ......................... 0x104
  Hardware Interrupt ......................... 0x105
  Memory Aperture ............................ 0xdf000000 - 0xdf01ffff
  Promiscuous Mode ........................... Off
  Multicast Support .......................... Enabled
  Packets Transmitted OK ..................... 542622
  Bytes Transmitted OK ....................... 165407946
  Broadcast Packets Transmitted OK ........... 4
  Multicast Packets Transmitted OK ........... 2
  Memory Allocation Failures on Transmit ..... 0
  Packets Received OK ........................
  708700
  Bytes Received OK .......................... 471998324
  Broadcast Packets Received OK .............. 101210
  Multicast Packets Received OK .............. 0
  Memory Allocation Failures on Receive ...... 0
  Single Collisions on Transmit .............. 0
  Multiple Collisions on Transmit ............ 0
  Deferred Transmits ......................... 0
  Late Collision on Transmit errors .......... 0
  Transmits aborted (excessive collisions) ... 0
  Jabber detected ............................ 0
  Receive Alignment errors ................... 0
  Received packets with CRC errors ........... 0
  Packets Dropped on receive ................. 0
  Oversized Packets received ................. 0
  Short packets .............................. 0
  Squelch Test errors ........................ 0
  Invalid Symbol Errors ...................... 0

This is a closed network, so there is no DNS machine, and I didn't specify any gateway or netmask. Do I need to do that, or set up some transfer buffer someplace to speed up the transfer speed?

TIA, Tim

Fri, 19 Jan 2018 23:22:02 GMT http://community.qnx.com/sf/go/post118417 Tim Sowden(deleted) 2018-01-19T23:22:02Z

post118403: Re: QNX7 devnp-e1000.so works for Intel Integrated I350 NIC for x86_64 but not x86_32 http://community.qnx.com/sf/go/post118403

Hello Armin,

I'm not using a 32bit driver with the 64bit environment. I am using the 32bit driver with the 32bit environment. The problem is that the 32bit driver in a 32bit environment does not work on a Dell R720 server, but that same server is able to run the 64bit drivers and environment with no issues. Some would say, just use the 64bit environment then. My response to them is that I have a requirement to use the 32bit architecture. My images are all created using QNX 7.0 standard images that came with QNX 7.0 Momentics.
This is my QNX Version: 7.0.0.v201705161739

Thanks, Scott

Thu, 18 Jan 2018 15:26:53 GMT http://community.qnx.com/sf/go/post118403 Scott Poulin(deleted) 2018-01-18T15:26:53Z

post118402: Re: QNX7 devnp-e1000.so works for Intel Integrated I350 NIC for x86_64 but not x86_32 http://community.qnx.com/sf/go/post118402

In general, it's not a good idea to use a 32bit driver within a 64bit environment. Other operating systems provide 32bit libs for that purpose. What about QNX 7.0?

Armin

Scott Poulin schrieb:
> Also, there is an error appearing in slog2info from dlopen_mod()/dlopen(): it states the library cannot be found.
>
> _______________________________________________
>
> Networking Drivers
> http://community.qnx.com/sf/go/post118338
> To cancel your subscription to this discussion, please e-mail drivers-networking-unsubscribe@community.qnx.com

Thu, 18 Jan 2018 15:07:21 GMT http://community.qnx.com/sf/go/post118402 Armin Steinhoff 2018-01-18T15:07:21Z

post118339: Re: QNX7 devnp-e1000.so works for Intel Integrated I350 NIC for x86_64 but not x86_32 http://community.qnx.com/sf/go/post118339

Scott,

Don't worry about pci_cap-0x10, but you should have one for pci_cap-0x11, as this is MSI-X. Now that you can mount a USB stick, can you please do the following:

pidin in > /path_to_usb/file
pci-tool -vv >> /path_to_usb/file
use -i /proc/boot/* >> /path_to_usb/file
pidin arg >> /path_to_usb/file

Please send me "file" from the USB stick.

Thanks, Hugh.

On 2018-01-05, 2:36 PM, "Scott Poulin" <community-noreply@qnx.com> wrote:

Hey Hugh, it appears that I am able to mount a USB drive and there don't appear to be any issues there. I happened to be going through slog2info and found the attached error. It looks like there is an issue with pci_cap-0x10 or pci_cap-0x11: pci_log says that the pci_cap-0x10 file can't be found for my NICs. I also tested this image on an alternate server.
It was a different model/make server, but it was able to boot the image and use the NIC and io-blk drivers. So from the image provided, is there an issue with NIC and driver compatibility?

Thanks, Scott

Fri, 05 Jan 2018 20:11:08 GMT http://community.qnx.com/sf/go/post118339 Hugh Brown 2018-01-05T20:11:08Z

post118338: Re: QNX7 devnp-e1000.so works for Intel Integrated I350 NIC for x86_64 but not x86_32 http://community.qnx.com/sf/go/post118338

Also, there is an error appearing in slog2info from dlopen_mod()/dlopen(): it states the library cannot be found.

Fri, 05 Jan 2018 19:59:11 GMT http://community.qnx.com/sf/go/post118338 Scott Poulin(deleted) 2018-01-05T19:59:11Z

post118337: Re: QNX7 devnp-e1000.so works for Intel Integrated I350 NIC for x86_64 but not x86_32 http://community.qnx.com/sf/go/post118337

Hey Hugh, it appears that I am able to mount a USB drive and there don't appear to be any issues there. I happened to be going through slog2info and found the attached error. It looks like there is an issue with pci_cap-0x10 or pci_cap-0x11: pci_log says that the pci_cap-0x10 file can't be found for my NICs. I also tested this image on an alternate server. It was a different model/make server, but it was able to boot the image and use the NIC and io-blk drivers. So from the image provided, is there an issue with NIC and driver compatibility?

Thanks, Scott

Fri, 05 Jan 2018 19:57:16 GMT http://community.qnx.com/sf/go/post118337 Scott Poulin(deleted) 2018-01-05T19:57:16Z

post118334: Re: QNX7 devnp-e1000.so works for Intel Integrated I350 NIC for x86_64 but not x86_32 http://community.qnx.com/sf/go/post118334

Scott,

Are you able to insert a USB stick into the machine and mount it?
Please let me know, as I would like to capture more information rather than screen shots.

Thanks, Hugh.

On 2018-01-04, 9:53 AM, "Scott Poulin" <community-noreply@qnx.com> wrote:

Hugh, When it loads I don't see any errors. Everything works as it should. I am able to run pidin in, pidin arg, and ls -lR without any errors. The only errors I see are from ifconfig and nicinfo -v, pertaining to the bad address of the NICs. The first image is the pidin in; the second image is the pidin arg.

Thu, 04 Jan 2018 15:38:50 GMT http://community.qnx.com/sf/go/post118334 Hugh Brown 2018-01-04T15:38:50Z

post118333: Re: QNX7 devnp-e1000.so works for Intel Integrated I350 NIC for x86_64 but not x86_32 http://community.qnx.com/sf/go/post118333

Hugh, When it loads I don't see any errors. Everything works as it should. I am able to run pidin in, pidin arg, and ls -lR without any errors. The only errors I see are from ifconfig and nicinfo -v, pertaining to the bad address of the NICs. The first image is the pidin in; the second image is the pidin arg.

Thu, 04 Jan 2018 15:14:14 GMT http://community.qnx.com/sf/go/post118333 Scott Poulin(deleted) 2018-01-04T15:14:14Z

post118332: Re: QNX7 devnp-e1000.so works for Intel Integrated I350 NIC for x86_64 but not x86_32 http://community.qnx.com/sf/go/post118332

I wonder why there are no compatibility libs available for running 32bit drivers within a 64bit environment and vice versa. Please explain.

Armin

Hugh Brown schrieb:
> Scott,
>
> Do all commands on your system fail under 32-bit, or only some of them? Do you see any errors on the screen while booting the system? Does USB work?
> Please can you send me the output from "pidin in" and "pidin arg"? Can you run "ls -lR /" to see if the file system recurses?
> I'm trying to figure out why 32-bit isn't working, but 64-bit is.
>
> Thanks, Hugh.
>
> On 2018-01-03, 4:46 PM, "Scott Poulin" <community-noreply@qnx.com> wrote:
>
> Hugh,
>
> I built an image from the binary you sent me and I am getting the same result. When the image boots it actually doesn't start the network driver. When I start the network driver it gives me the same result as before. I did notice something odd, and I think it may be something with pci-server or with how I create the image (not sure yet).
> But when I go to cat the pcidatabase.com-tab_delimited.txt file it throws a bad address.
>
> Attached is the cat of the pcidatabase.com-tab_delimited.txt file.
>
> Thanks,
> Scott

Thu, 04 Jan 2018 14:57:35 GMT http://community.qnx.com/sf/go/post118332 Armin Steinhoff 2018-01-04T14:57:35Z

post118331: Re: QNX7 devnp-e1000.so works for Intel Integrated I350 NIC for x86_64 but not x86_32 http://community.qnx.com/sf/go/post118331

Scott,

Do all commands on your system fail under 32-bit, or only some of them? Do you see any errors on the screen while booting the system? Does USB work? Please can you send me the output from "pidin in" and "pidin arg"? Can you run "ls -lR /" to see if the file system recurses? I'm trying to figure out why 32-bit isn't working, but 64-bit is.

Thanks, Hugh.

On 2018-01-03, 4:46 PM, "Scott Poulin" <community-noreply@qnx.com> wrote:

Hugh,

I built an image from the binary you sent me and I am getting the same result.
When the image boots it actually doesn't start the network driver. When I start the network driver it gives me the same result as before. I did notice something odd, and I think it may be something with pci-server or with how I create the image (not sure yet). But when I go to cat the pcidatabase.com-tab_delimited.txt file it throws a bad address.

Attached is the cat of the pcidatabase.com-tab_delimited.txt file.

Thanks, Scott

Thu, 04 Jan 2018 13:27:12 GMT http://community.qnx.com/sf/go/post118331 Hugh Brown 2018-01-04T13:27:12Z

post118327: Re: QNX7 devnp-e1000.so works for Intel Integrated I350 NIC for x86_64 but not x86_32 http://community.qnx.com/sf/go/post118327

Hugh,

I built an image from the binary you sent me and I am getting the same result. When the image boots it actually doesn't start the network driver. When I start the network driver it gives me the same result as before. I did notice something odd, and I think it may be something with pci-server or with how I create the image (not sure yet). But when I go to cat the pcidatabase.com-tab_delimited.txt file it throws a bad address.

Attached is the cat of the pcidatabase.com-tab_delimited.txt file.

Thanks, Scott

Wed, 03 Jan 2018 22:07:33 GMT http://community.qnx.com/sf/go/post118327 Scott Poulin(deleted) 2018-01-03T22:07:33Z

post118326: Re: QNX7 devnp-e1000.so works for Intel Integrated I350 NIC for x86_64 but not x86_32 http://community.qnx.com/sf/go/post118326

I have attached my boot image to this email. Please can you try it?

Thanks, Hugh.

On 2018-01-03, 2:56 PM, "Scott Poulin" <community-noreply@qnx.com> wrote:

It occurs for both the x540 and i350 only on 32bit. I have not tried the x540 for 64bit, but I know the i350 works for the 64bit using the e1000 driver and io-pkt-v4-hc.
I believe the SDP I am using is the official release of QNX7.

Wed, 03 Jan 2018 20:21:38 GMT http://community.qnx.com/sf/go/post118326 Hugh Brown 2018-01-03T20:21:38Z

post118325: Re: QNX7 devnp-e1000.so works for Intel Integrated I350 NIC for x86_64 but not x86_32 http://community.qnx.com/sf/go/post118325

It occurs for both the x540 and i350 only on 32bit. I have not tried the x540 for 64bit, but I know the i350 works for the 64bit using the e1000 driver and io-pkt-v4-hc. I believe the SDP I am using is the official release of QNX7.

Wed, 03 Jan 2018 20:17:32 GMT http://community.qnx.com/sf/go/post118325 Scott Poulin(deleted) 2018-01-03T20:17:32Z

post118324: Re: QNX7 devnp-e1000.so works for Intel Integrated I350 NIC for x86_64 but not x86_32 http://community.qnx.com/sf/go/post118324

Does the problem occur with both the x540 and i350? If so, then this isn't a driver problem. Have you updated to the official SDP7 release?

On 2018-01-03, 2:42 PM, "Scott Poulin" <community-noreply@qnx.com> wrote:

So I went and wrote the image to a hard drive and booted from it. I got the same results as before. I checked with pci-tool and it shows the NIC is there. For some reason, there is still a compatibility issue with the integrated x540/i350 NICs. I ran ixgbe for the x540 and e1000 for the i350. I even used the did and vid when starting the driver.
Wed, 03 Jan 2018 20:05:55 GMT http://community.qnx.com/sf/go/post118324 Hugh Brown 2018-01-03T20:05:55Z

post118323: Re: QNX7 devnp-e1000.so works for Intel Integrated I350 NIC for x86_64 but not x86_32 http://community.qnx.com/sf/go/post118323

So I went and wrote the image to a hard drive and booted from it. I got the same results as before. I checked with pci-tool and it shows the NIC is there. For some reason, there is still a compatibility issue with the integrated x540/i350 NICs. I ran ixgbe for the x540 and e1000 for the i350. I even used the did and vid when starting the driver.

Wed, 03 Jan 2018 20:03:28 GMT http://community.qnx.com/sf/go/post118323 Scott Poulin(deleted) 2018-01-03T20:03:28Z

post118322: RE: RE: QNX7 devnp-e1000.so works for Intel Integrated I350 NIC for x86_64 but not x86_32 http://community.qnx.com/sf/go/post118322

That should be "device id", not "base address" -- the output from "pci" or "pci -v" should indicate the device ID of your NIC. I usually see them defined in the "net-start.sh" script in the build file.

________________________________________
From: Scott Poulin [community-noreply@qnx.com]
Sent: Wednesday, January 03, 2018 1:34 PM
To: drivers-networking
Subject: Re: RE: QNX7 devnp-e1000.so works for Intel Integrated I350 NIC for x86_64 but not x86_32

Hello Dean,

Can you elaborate on what you mean by the base address missing in my build file?
Thanks, Scott

Wed, 03 Jan 2018 18:58:46 GMT http://community.qnx.com/sf/go/post118322 Dean Denter 2018-01-03T18:58:46Z

post118321: Re: QNX7 devnp-e1000.so works for Intel Integrated I350 NIC for x86_64 but not x86_32 http://community.qnx.com/sf/go/post118321

This sounds as though there could be an incompatibility issue with your virtual machine. I ran on a native PC. The BSP that you downloaded from QNX should be OK, but to be on the safe side, you should be running the official release of SDP7.

Hugh.

On 2018-01-03, 1:16 PM, "Scott Poulin" <community-noreply@qnx.com> wrote:

Hello Hugh,

So I took your build image, created an image from it, and included the 32bit e1000 driver as well. I get the same results as before. Is there something wrong with the BSP that I downloaded from QNX? Also, there shouldn't be any issues if I am running the image as a virtual image via a DRAC console, correct? It treats it as if it were a disk image on a hard drive, so I figure there shouldn't be any issues from this.

Thanks, Scott

Wed, 03 Jan 2018 18:40:47 GMT http://community.qnx.com/sf/go/post118321 Hugh Brown 2018-01-03T18:40:47Z

post118320: Re: QNX7 devnp-e1000.so works for Intel Integrated I350 NIC for x86_64 but not x86_32 http://community.qnx.com/sf/go/post118320

Hello Hugh,

So I took your build image, created an image from it, and included the 32bit e1000 driver as well. I get the same results as before. Is there something wrong with the BSP that I downloaded from QNX?
Also, there shouldn't be any issues if I am running the image as a virtual image via a DRAC console, correct? It treats it as if it were a disk image on a hard drive, so I figure there shouldn't be any issues from this.

Thanks, Scott

Wed, 03 Jan 2018 18:37:20 GMT http://community.qnx.com/sf/go/post118320 Scott Poulin(deleted) 2018-01-03T18:37:20Z

post118319: Re: RE: QNX7 devnp-e1000.so works for Intel Integrated I350 NIC for x86_64 but not x86_32 http://community.qnx.com/sf/go/post118319

Hello Dean,

Can you elaborate on what you mean by the base address missing in my build file?

Thanks, Scott

Wed, 03 Jan 2018 18:34:19 GMT http://community.qnx.com/sf/go/post118319 Scott Poulin(deleted) 2018-01-03T18:34:19Z

post118318: Re: QNX7 devnp-e1000.so works for Intel Integrated I350 NIC for x86_64 but not x86_32 http://community.qnx.com/sf/go/post118318

io-pkt-v4-hc was dropped from the official release of SDP7, so only io-pkt-v6-hc is supported.

On 2018-01-03, 11:58 AM, "Scott Poulin" <community-noreply@qnx.com> wrote:

I guess what is the difference between io-pkt-v4-hc and io-pkt-v6-hc? In the alpha of SDP7 I was able to use io-pkt-v4-hc in the x86_64 version.

Wed, 03 Jan 2018 18:11:29 GMT http://community.qnx.com/sf/go/post118318 Hugh Brown 2018-01-03T18:11:29Z

post118317: RE: QNX7 devnp-e1000.so works for Intel Integrated I350 NIC for x86_64 but not x86_32 http://community.qnx.com/sf/go/post118317

Both io-pkt-v4-hc and io-pkt-v6-hc are supported on both 32bit and 64bit targets in SDP7. You should not use io-pkt-v4 in SDP7. For your e1000 issue, check the pci-server configuration -- there may be a base address missing from your build file. 'pci -v' output may help troubleshoot this.

Regards, Dean.
________________________________________
From: Scott Poulin [community-noreply@qnx.com]
Sent: Wednesday, January 03, 2018 12:19 PM
To: drivers-networking
Subject: Re: QNX7 devnp-e1000.so works for Intel Integrated I350 NIC for x86_64 but not x86_32

I guess what is the difference between io-pkt-v4-hc and io-pkt-v6-hc? In the alpha of SDP7 I was able to use io-pkt-v4-hc in the x86_64 version.

Wed, 03 Jan 2018 17:56:44 GMT http://community.qnx.com/sf/go/post118317 Dean Denter 2018-01-03T17:56:44Z

post118315: Re: QNX7 devnp-e1000.so works for Intel Integrated I350 NIC for x86_64 but not x86_32 http://community.qnx.com/sf/go/post118315

I guess what is the difference between io-pkt-v4-hc and io-pkt-v6-hc? In the alpha of SDP7 I was able to use io-pkt-v4-hc in the x86_64 version.

Wed, 03 Jan 2018 17:19:13 GMT http://community.qnx.com/sf/go/post118315 Scott Poulin(deleted) 2018-01-03T17:19:13Z

post118314: Re: QNX7 devnp-e1000.so works for Intel Integrated I350 NIC for x86_64 but not x86_32 http://community.qnx.com/sf/go/post118314

You should only use the io-pkt-v6-hc driver with SDP7.

On 2018-01-03, 11:37 AM, "Scott Poulin" <community-noreply@qnx.com> wrote:

Also, I am using the io-pkt-v4-hc driver instead of the io-pkt-v6-hc driver.

Wed, 03 Jan 2018 17:04:56 GMT http://community.qnx.com/sf/go/post118314 Hugh Brown 2018-01-03T17:04:56Z
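Dean's advice in this thread (find the NIC's device ID with 'pci -v' and bind the driver to it with the did/vid options, which Scott mentions using) can be sketched concretely. This is only a sketch for a QNX target: the 0x8086/0x1521 values shown are the common Intel vendor ID and one I350 copper variant's device ID, given here as assumptions; substitute whatever your own 'pci -v' output reports.

```shell
# Find the NIC's vendor and device ID (output format varies
# between the pci utility and pci-tool on different SDP versions).
pci -v | grep -i -A 2 ethernet

# Start io-pkt-v6-hc, binding the e1000 driver to that exact device.
# vid/did values are illustrative; use the IDs reported above.
io-pkt-v6-hc -d e1000 vid=0x8086,did=0x1521
if_up -r 10 -p wm0
ifconfig wm0 172.27.12.3 up
```

If the driver still reports nothing with the explicit vid/did, the device ID is simply not handled by that driver build, which distinguishes a driver-support gap from a pci-server configuration problem.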