
Hello! I have started playing with nng and I have a question about the performance of ipc vs tcp. 🙂

Describe the question

It is similar to the question asked in #1362, but here I'm measuring the performance difference between ipc and tcp in the context of req/rep with a single client and server.

The setup is relatively simple: a client and server run on the same machine; the client sends a number of 128-byte messages to the server, and the server returns the same messages. I tested this scenario with both tcp and ipc.

Here are the results:

===========================================================================================
Benchmarking ipc - Data Size = 128
ipc: 19525.846832071937 request_reply/s
ipc: 51.214168 µs/request_reply
===========================================================================================
Benchmarking tcp - Data Size = 128
tcp: 22674.175476849814 request_reply/s
tcp: 44.103037 µs/request_reply

So I'm curious why ipc, which as far as I understand relies on named pipes on Windows in nng, seems to be roughly 15% slower than the local tcp socket version. Any ideas?
(It could be that I wrote something completely wrong; details at the bottom.)

**Environment Details**

  • NNG version 1.5.2
  • Windows 11 x64
  • VS2019 x64
  • Shared library

**Additional context**

    Code details in C#

    I don't have a C program, but I wrote a C# program with my own raw wrapper of nng, which should look very similar to a C version. It is a basic req/rep example with a loop around the receive/send part:

    using System;
    using System.Diagnostics;
    using System.Threading;
    using static nng;

    const int count = 100000;

    // Child-process server mode: invoked as "--server <url>".
    if (args.Length == 2)
    {
        switch (args[0])
        {
            case "--server":
                Server(args[1]);
                break;
        }
        return;
    }

    foreach (var size in new int[] { 128, 1024, 16384 })
    {
        Benchmark($"ipc:///tmp/SharpNngBenchmarks_{Guid.NewGuid():N}.ipc", size);
        Benchmark($"tcp://127.0.0.1:6001", size);
    }

    static void Benchmark(string ipcName, int size)
    {
        var benchKind = ipcName.Substring(0, ipcName.IndexOf(':'));
        Console.WriteLine($"===========================================================================================");
        Console.WriteLine($"Benchmarking {benchKind} - Data Size = {size}");
        // Launch this executable again as the server for the given URL.
        var process = new Process();
        process.StartInfo = new ProcessStartInfo(Process.GetCurrentProcess().MainModule.FileName, $"--server {ipcName}")
        {
            RedirectStandardError = true,
            RedirectStandardOutput = true,
            UseShellExecute = false,
            WindowStyle = ProcessWindowStyle.Hidden,
            CreateNoWindow = true
        };
        process.ErrorDataReceived += server_ErrorDataReceived;
        process.OutputDataReceived += server_OutputDataReceived;
        process.EnableRaisingEvents = true;
        process.Start();
        process.BeginOutputReadLine();
        process.BeginErrorReadLine();
        Thread.Sleep(1000); // give the server time to start listening
        var clock = Stopwatch.StartNew();
        bool processTerminated = false;
        try
        {
            Client(ipcName, size); // was hardcoded to 128; use the requested size
            clock.Stop();
            process.WaitForExit(1000);
            processTerminated = true;
            Console.WriteLine($"{benchKind}: {((double)count) / clock.Elapsed.TotalSeconds} request_reply/s");
            Console.WriteLine($"{benchKind}: {clock.Elapsed.TotalMilliseconds * 1000.0 / (double)count} µs/request_reply");
        }
        finally
        {
            if (!processTerminated)
                process.Kill();
        }
    }

    static void Server(string ipcName)
    {
        nng_socket sock = default;
        long sizeInBytesReceived = 0;
        int result = nng_rep0_open(ref sock);
        nng_assert(result);
        try
        {
            Console.Out.WriteLine($"Server: Starting {ipcName}");
            nng_listener listener = default;
            result = nng_listen(sock, ipcName, ref listener, 0);
            nng_assert(result);
            Console.Out.WriteLine("Server: Listening");
            for (int i = 0; i < count; i++)
            {
                // Receive a buffer
                result = nng_recv(sock, out var buffer);
                nng_assert(result);
                sizeInBytesReceived += buffer.Length;
                // Send the same buffer back
                result = nng_send(sock, buffer.AsSpan());
                nng_assert(result);
                buffer.Dispose();
            }
        }
        finally
        {
            nng_close(sock);
            Console.WriteLine($"Server: Closed ({sizeInBytesReceived} bytes received)");
        }
    }

    static void Client(string ipcName, int size)
    {
        Console.Out.WriteLine("Client: Started");
        nng_socket sock = default;
        int result = nng_req0_open(ref sock);
        nng_assert(result);
        var buffer = new byte[size];
        try
        {
            nng_dialer dialer = default;
            // Retry dialing until the server is listening.
            for (int i = 0; i < 10; i++)
            {
                result = nng_dial(sock, ipcName, ref dialer, 0);
                if (result == 0) break;
                Console.WriteLine("Client: dial failed, waiting for server to listen - sleep 100ms");
                Thread.Sleep(100);
            }
            nng_assert(result);
            Console.Out.WriteLine("Client: Connected");
            Console.Out.WriteLine("Client: Sending");
            for (int i = 0; i < count; i++)
            {
                result = nng_send(sock, buffer);
                nng_assert(result);
                result = nng_recv(sock, out var recvbuffer);
                nng_assert(result);
                if (recvbuffer.Length != buffer.Length) throw new InvalidOperationException("Size is not matching");
                recvbuffer.Dispose();
            }
        }
        finally
        {
            nng_close(sock);
            Console.WriteLine("Client: Closed");
        }
    }

    static void server_ErrorDataReceived(object sender, DataReceivedEventArgs e)
    {
        if (e.Data == null) return;
        Console.WriteLine(e.Data);
    }

    static void server_OutputDataReceived(object sender, DataReceivedEventArgs e)
    {
        if (e.Data == null) return;
        Console.WriteLine(e.Data);
    }

    The answer to this is really buried in the implementation of TCP vs Named Pipes on your system, I think. It would be fairly easy to make a sample test that just benchmarks sending 128-byte messages over streams built using these two transport types (a rough sketch follows below). My guess is you would find a similar difference. I also suspect that if you chose a non-loopback address for TCP, the results would be different.

    Modern systems have a lot of effort spent in optimizing TCP. I'm not sure that the same effort has been invested in named pipes.
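
    For what it's worth, a raw-stream version of that test might look something like the sketch below (C#, both ends in one process on separate threads, synchronous blocking I/O; the pipe name "bench_pipe" and all helper names are made up for illustration). It measures the transports themselves, without nng in the loop:

    // Rough sketch of a raw-stream test (not nng): echo 128-byte messages over
    // a named pipe and over loopback TCP, both ends in one process.
    using System;
    using System.Diagnostics;
    using System.IO;
    using System.IO.Pipes;
    using System.Net;
    using System.Net.Sockets;
    using System.Threading.Tasks;

    const int count = 100000;
    var message = new byte[128];

    // Named pipe round trips
    using (var server = new NamedPipeServerStream("bench_pipe"))
    using (var client = new NamedPipeClientStream(".", "bench_pipe"))
    {
        var echoTask = Task.Run(() => { server.WaitForConnection(); Echo(server); });
        client.Connect();
        Console.WriteLine($"namedpipe: {Run(client)} µs/request_reply");
        echoTask.Wait();
    }

    // Loopback TCP round trips
    var listener = new TcpListener(IPAddress.Loopback, 0);
    listener.Start();
    var acceptTask = Task.Run(() =>
    {
        using var s = listener.AcceptTcpClient().GetStream();
        Echo(s);
    });
    using (var tcp = new TcpClient())
    {
        tcp.NoDelay = true; // disable Nagle so small messages go out immediately
        tcp.Connect(IPAddress.Loopback, ((IPEndPoint)listener.LocalEndpoint).Port);
        Console.WriteLine($"tcp (loopback): {Run(tcp.GetStream())} µs/request_reply");
    }
    acceptTask.Wait();
    listener.Stop();

    // Send count messages and wait for each echo; returns µs per round trip.
    double Run(Stream stream)
    {
        var reply = new byte[message.Length];
        var clock = Stopwatch.StartNew();
        for (int i = 0; i < count; i++)
        {
            stream.Write(message, 0, message.Length);
            ReadExactly(stream, reply);
        }
        return clock.Elapsed.TotalMilliseconds * 1000.0 / count;
    }

    // Echo count messages back to the peer.
    void Echo(Stream stream)
    {
        var buffer = new byte[message.Length];
        for (int i = 0; i < count; i++)
        {
            ReadExactly(stream, buffer);
            stream.Write(buffer, 0, buffer.Length);
        }
    }

    static void ReadExactly(Stream stream, byte[] buffer)
    {
        for (int read = 0; read < buffer.Length; )
        {
            int n = stream.Read(buffer, read, buffer.Length - read);
            if (n == 0) throw new EndOfStreamException();
            read += n;
        }
    }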

    Reopening, as I ran some benchmarks on my Windows 11 x64 machine using:

  • Unix Domain Sockets
  • Plain TCP Sockets (loopback)
  • Named Pipes

    Here are the results of a simple request/reply sequence of 128 bytes:

    unix domain socket server: 84359.45360719337 request_reply/s
    unix domain socket server: 11.854036 µs/request_reply
    tcp socket server (loopback): 63809.05472181582 request_reply/s
    tcp socket server (loopback): 15.671757 µs/request_reply
    namedpipe server: 114980.23432281874 request_reply/s
    namedpipe server: 8.697147 µs/request_reply
    

    So you can see that named pipes are almost twice as fast as tcp sockets on my machine...

    The difference with nng is that I don't have a multithreaded queue, so I don't pay the cost of all the kernel context switches and synchronization between threads, and I don't have a complex protocol (just request/reply on the same stream/pipe). A rough estimate of that cross-thread handoff cost is sketched below.

    But in theory I would expect to save around 7 µs using nng+namedpipes compared to the socket version, so there is something going on...
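
    To put a rough number on that handoff, here is an illustrative micro-benchmark sketch (all names are made up): it round-trips a token between two threads through blocking queues, which approximates the kind of handoff an internal message queue adds on top of the raw transport:

    // Rough sketch: measure the cost of a cross-thread request/reply handoff,
    // i.e. the kind of overhead a single-threaded stream benchmark avoids.
    using System;
    using System.Collections.Concurrent;
    using System.Diagnostics;
    using System.Threading.Tasks;

    const int iterations = 100000;
    var requests = new BlockingCollection<int>();
    var replies = new BlockingCollection<int>();

    // "Server" thread: take a request, hand back a reply.
    var worker = Task.Run(() =>
    {
        for (int i = 0; i < iterations; i++)
            replies.Add(requests.Take());
    });

    var clock = Stopwatch.StartNew();
    for (int i = 0; i < iterations; i++)
    {
        requests.Add(i);  // wake the other thread
        replies.Take();   // block until it answers
    }
    worker.Wait();
    Console.WriteLine($"cross-thread round trip: {clock.Elapsed.TotalMilliseconds * 1000.0 / iterations} µs");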

    For the record: while I have an interest in this, and will fix it if it turns out to be something easy, I'm probably not interested in investing a ton in this, as it's a very niche use case. Almost nobody uses this transport on Windows, and I suspect fewer still use it in performance-sensitive contexts.

    There are a few possible answers:

  • Differences in the context switches needed. (Named Pipes are very different from the others.) We use thread context switches, and that will make your results somewhat less predictable -- especially if the system under test is doing anything else. (Hint: Windows machines are always doing something else. :-)
  • The message size -- for IPC messages we add a little bit of header, and that might disadvantage message sizes that land on even boundaries like 128 bytes. This is probably a stretch, but one could test with different message sizes to see if there is a surprise (see the sketch after this list).
  • It would be good to have more information about how reliable these results are. That might inform some opportunities to improve the performance of this transport -- although, as I indicated earlier, I'm not aware that folks are using it much in contexts where this level of sensitivity is a concern. (Typically you only care about < 10 µs when doing high-frequency trading.)
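
    If it helps, the message-size hypothesis in the second bullet is easy to probe with the Benchmark helper from the program above; a sketch, with arbitrary probe sizes:

    // Sweep sizes around 128 bytes and look for a step change that would point
    // at a header/alignment effect. The probe sizes here are arbitrary.
    foreach (var size in new int[] { 120, 124, 127, 128, 129, 132, 136, 256 })
    {
        Benchmark($"ipc:///tmp/SharpNngBenchmarks_{Guid.NewGuid():N}.ipc", size);
        Benchmark($"tcp://127.0.0.1:6001", size);
    }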