Can you write the code which tells you when you have the no IPv4 “thing” without trying the IPv4 thing on a network which by definition should not have the IPv4 thing where you want people not to try the IPv4 thing?

Response:

We've been here before - I don't think there is any way to write an algorithm that can differentiate between "I can see IPv4 chatter and I should join in because this network isn't completely IPv6-only yet" and "I can see IPv4 chatter and I shouldn't join in because this network is supposed to be IPv6-only but some other nodes don't understand that". If a key part of the algorithm is "there aren't other IPv4 nodes chattering", then you only need one device on the network to prevent everything else from shutting down IPv4.

Low End Box: a web site which presents hosting specials from many different VPS providers: VPS under $10 per month, physical servers under $50 per month. Over-subscription must work well for the casual user. The archives go back to 2008.

2018/10/04 - edgeLinux - provides the ideal platform to run all the Virtual Servers you need, easily and efficiently. By leveraging both virtualization (KVM) and bare-metal containerization (LXC), the user has maximum flexibility on how to get the most out of any hardware.

2018/11/09 - Three ports: even though they are labelled as wan, lan0, and lan1, all three exist on the same Topaz switch. Layer 3 functions on each port will probably work, but trying to make something like Open vSwitch handle packet switching between ports will not work (unless the hardware offload works, which I doubt). The Topaz switch takes care of all MAC learning and line forwarding. The three ports are not independent of each other.

With those steps, I was able to get a successful boot to ensure the board was functional. I used 'screen /dev/ttyUSB0 115200' as a console connection command to the EspressoBIN.

Ultimately, I want a Debian bootable system. So, for experiment #2.... GitHub armbian/build has the instructions for building from scratch. But I opted for instant gratification by downloading and installing a pre-built image. Even though the instructions at armbian espressobin are terse, they are accurate in terms of what is needed to get a (as of this writing) Debian Stretch v4.18.y kernel installed on the EspressoBIN. The FAQ describes how to burn an image to an SD card (in my case, Etcher on Linux). I disable NetworkManager.service, NetworkManager-wait-online.service, systemd-networkd and systemd-resolved. I then manually adjust /etc/network/interfaces. The Armbian Documentation talks about special optimizations as part of the build.
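A sketch of those adjustments as commands. The service names come from the text above; the interface names and addresses are assumptions based on the EspressoBIN's default eth0 + wan/lan0/lan1 DSA layout, so adjust to taste.

```shell
# Disable the network managers mentioned above so /etc/network/interfaces
# is the single source of truth.
sudo systemctl disable NetworkManager.service NetworkManager-wait-online.service
sudo systemctl disable systemd-networkd.service systemd-resolved.service

# Minimal static configuration; addresses are illustrative placeholders.
sudo tee /etc/network/interfaces >/dev/null <<'EOF'
auto lo
iface lo inet loopback

auto eth0            # SoC side of the Topaz switch; must be up for DSA ports
iface eth0 inet manual

auto wan
iface wan inet static
    address 192.0.2.10/24
    gateway 192.0.2.1
EOF
```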

The manpage for systemd.netdev reads, in the part concerning the MACAddress= entry in the [NETDEV] section: "If none is given, one is generated based on the interface name and the machine-id(5)".

The manpage for dbus-uuidgen says that the D-Bus machine-id should not be changed on a running machine, or, as they write: it will probably result in bad things happening. The machine-id should remain constant at least until the next reboot.

The steps to regenerate the machine-id offline:

- remove the SD card from the EspressoBin,
- mount it on a different Linux system,
- delete /the/mounting/point/var/lib/dbus/machine-id,
- generate a random new machine id using dbus-uuidgen --ensure=/the/mounting/point/var/lib/dbus/machine-id
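The steps above, sketched as shell commands; /dev/sdX2 and /mnt/sd are placeholders for your card reader's root partition and a mount point.

```shell
# Regenerate the D-Bus machine-id on the EspressoBin's SD card
# from another Linux box.
sudo mount /dev/sdX2 /mnt/sd
sudo rm /mnt/sd/var/lib/dbus/machine-id
sudo dbus-uuidgen --ensure=/mnt/sd/var/lib/dbus/machine-id
sudo umount /mnt/sd
```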

Just for the record: we have a SoC talking via RGMII to the onboard switch (currently at 1GbE, but maybe 2.5GbE is possible). We use DSA to tell the switch to not act as a switch on layer 2 anymore but to separate the downstream ports into 3 individual interfaces, just to bridge them again at the kernel layer above. Doing this at the DSA layer (telling the switch to be a dumb layer 2 switch accessible as eth0) might save CPU resources and result in better performance, if I'm not wrong?

So it looks like the Topaz switch needs to be talking to the SoC via SGMII instead of RGMII to achieve 2.5G.

...and I think I'm officially convinced that the max bandwidth between the Topaz switch and the CPU is in fact 1 gigabit... The connection uses a separate RGMII lane (eth0).

eth1 -- would be the interface capable of SGMII, but the 3 fast lanes are occupied with USB, MiniPCI and SATA.

MTD: where the firmware and the u-boot environment are stored. The u-boot-tools package contains some utilities to read/modify the u-boot environment, which is much more comfortable than using the interactive u-boot shell.
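A hedged sketch of those utilities; the MTD offsets below are placeholders, and the real values for a given EspressoBIN u-boot build must come from its actual flash layout.

```shell
# fw_printenv/fw_setenv (from u-boot-tools) need /etc/fw_env.config to know
# where in MTD the environment lives: device, offset, environment size.
sudo tee /etc/fw_env.config >/dev/null <<'EOF'
/dev/mtd0 0x3f0000 0x10000
EOF

sudo fw_printenv                # dump the whole u-boot environment
sudo fw_printenv bootcmd        # read one variable
sudo fw_setenv bootdelay 3      # set one variable
```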

I/O Related Testing: "keep in mind that we've tested these 'advanced' setups with mPCIe SATA controllers. On the onboard SATA port the EspressoBin should, in a single-disk configuration, easily exceed 500 MB/s (or in other words: fast enough for any HDD imaginable)"

bootlin (formerly Free Electrons) - it was confirmed that support for the Marvell 3720 security engine is available in the crypto tree of 4.16-rc1 and that some known bugs will be fixed before the final release of 4.16.

I had a quick look over the diff for 4.16-rc1, and it looks like there's progress on support for DVFS too.

These problems seem to arise (after some time) if your board is powered simultaneously by two sources with different GND potentials (check DC and AC).
In this case your board will be exposed to severe electrical and thermal strain causing hardware issues after some time.
It can be avoided if you access the serial console using a laptop that is not itself connected to a power supply (see usb/main power).

Cyclone V SoC Multi-Port Ethernet Aggregator Board - NetLeap - NovTech's NetLeap, an Industry 4.0 Multi-Protocol Ethernet Ports Aggregator Platform, allows the development and integration of a variety of Ethernet protocols including PROFINET, EtherCAT, EtherNet/IP, Ethernet Powerlink, Modbus TCP, SERCOS III and more. This solution is preproduction ready. A total of six 1G/100/10 Ethernet ports are present. Two are connected to the HPS (the ARM Cortex-A9 core of the Cyclone V SoC) and four are connected to the FPGA fabric. With its six ports, the NetLeap allows different protocols to reside on the same platform and can be used as a protocol bridge, a switch, or a router. The kit comes with a templated project example that allows the board to boot to Linux and have all six ports work as standard Ethernet ports.

2018/10/15 - SM72442 - TI - (ACTIVE) Programmable Maximum Power Point Tracking Controller for Photovoltaic Solar Panels - features a proprietary algorithm called Panel Mode which allows for the panel to be connected directly to the output of your power optimizer circuit. Along with the SM72295 (Photovoltaic Full Bridge Driver), it creates a solution for an MPPT configured DC-DC converter with efficiencies up to 99.5%.

Without knowing where to look, it is hard to find information on performing large scale load balancing without resorting to heavyweight, expensive commercial load balancers, or small scale home brew load balancers based upon HAProxy and VRRP.

HAProxy is a layer 4/7 load balancer service which looks into the packets to determine where to send sessions. For an application I'm working on, I don't need that type of heavy duty evaluation. As this service resides mostly in userland, there is some additional overhead from userland/kernel transitions.

Load balancing based upon source IP and/or destination IP and/or port numbers is all I need. Linux Virtual Server (LVS) seems to fit the bill. It is kernel resident, with userland tools for management. The web site doesn't look to have recent updates, so the first impression is that it is unmaintained code. But after digging into the mailing lists, it seems to be an actively used service. The wiki and mailing lists talk about many different active/active and active/passive scenarios, but none really discuss how to use a number of load balancers actively and simultaneously.

At some point, I did see a passing reference to using BGP as part of a load balancing solution. After looking further, I came across Day 11 - Turning off the Pacemaker: Load Balancing Across Layer 3, starting with the section on "Solving the HAProxy SPoF Problem". Now that is getting closer to a solution which makes sense to me. The article refers to using Bird or Quagga as BGP engines which can manipulate the FIB. The article also refers to using BGP AnyCast as part of the solution. The issue with this basic setup is that with a bunch of load balancers each running BGP, you get into scaling issues with BGP full-mesh requirements.

The common solution in the network world to full-mesh issues is to use route reflectors; a BGP daemon such as Bird could fill that role. But I came across ExaBGP. This is a Python-based application which knows how to talk BGP. It doesn't know how to manipulate FIBs; it was designed to fit the role of prefix injection. So it looks like the ideal candidate for being a route reflector managing route injection, assigning traffic to the LVS load balancers in a deterministic, resilient manner. There is a link which talks about High availability with ExaBGP. Exactly what I needed. Some additional background on the solution type: Stop Buying Load Balancers and Start Controlling Your Traffic Flow with Software.
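As a hedged sketch of the prefix-injection role: ExaBGP runs a helper process and reads announce/withdraw commands from its stdout. The file names, addresses, AS numbers and health URL below are all illustrative, and the configuration shape follows ExaBGP 3.x, so check against your version.

```shell
# A neighbor definition that runs a health-check process.
cat > exabgp.conf <<'EOF'
neighbor 192.0.2.1 {                 # the upstream router / route reflector
    router-id 192.0.2.2;
    local-address 192.0.2.2;
    local-as 64512;
    peer-as 64512;
    process health-check {
        run /usr/local/bin/lvs-health.sh;
    }
}
EOF

# Announce the service address while the local service answers; withdraw
# it otherwise. ExaBGP reads these commands from the process's stdout.
cat > lvs-health.sh <<'EOF'
#!/bin/sh
while true; do
    if curl -fsS -o /dev/null http://127.0.0.1:80/health; then
        echo "announce route 198.51.100.10/32 next-hop self"
    else
        echo "withdraw route 198.51.100.10/32 next-hop self"
    fi
    sleep 10
done
EOF
```

Because ExaBGP never touches the FIB, the upstream routers do the actual traffic steering; the load balancer host only signals availability.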

2018/01/08: as an update, rather than using the OSPF/BGP/RouteReflector scenario for datacenter dynamic routing, it is possible to use a hierarchy of eBGP peers to handle interior routing: Use of BGP for routing in large-scale data centers. OSPF is not needed. Nor are route reflectors. Something like ExaBGP can still be used to perform route injection for loopback or dummy interface route injection as services come and go. This handles the AnyCast routing scenario with aplomb.

Referring back to LVS for a second, LVS can load balance to services with LVS-DR (direct route), LVS-TUN (one-way tunnel to off-subnet services), or LVS-NAT. For me, Virtual Server via Direct Routing appears to be the way to go.
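A hedged sketch of what an LVS-DR setup looks like with the ipvsadm userland tool; the VIP and real-server addresses are illustrative only.

```shell
VIP=198.51.100.10

# Create the virtual service and add two real servers; -g selects
# direct routing (LVS-DR), -s rr round-robin scheduling.
sudo ipvsadm -A -t "$VIP:80" -s rr
sudo ipvsadm -a -t "$VIP:80" -r 192.0.2.11:80 -g
sudo ipvsadm -a -t "$VIP:80" -r 192.0.2.12:80 -g
sudo ipvsadm -L -n              # inspect the resulting table

# On each real server the VIP must also exist on a non-ARPing interface:
#   ip addr add 198.51.100.10/32 dev lo
#   sysctl -w net.ipv4.conf.all.arp_ignore=1
#   sysctl -w net.ipv4.conf.all.arp_announce=2
```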

Load balancing works through the interaction of multiple sub-systems. The first level is round-robin DNS, which provides some flexibility in determining where services will be served from. The DNS name may resolve to one or more IP addresses. The next stage makes use of BGP. Each of the IP addresses can be assigned to one or more physical hosts. What this means is that each physical host will advertise that IP address to the edge, and each host provides a unique metric. The host with the best metric receives the traffic for that IP address. With multiple IP addresses in play, and multiple hosts each advertising a subset of those addresses, loads can be balanced across many hosts. If a host goes out of commission, each of its advertised IP addresses is withdrawn manually or automatically from the routing tables, and loads re-adjust automatically to the still-active hosts, with the hosts holding the next best metric picking up that load.

ECMP might also be used for delivering and balancing traffic. This gets into more esoteric traffic management conditions. In any case, LVS resides on each host, and is the final stage of load balancing the traffic across a number of services, each of which is a virtualized guest (or physical hosts on huge scale-outs).

2016/12/3 update: even though the IPVS page is horribly out-of-date, IPVS appears to be alive and well, as there was a release of ipvsadm v1.29 today, with the userland tools available at IPVSADM git via git.kernel.org.

It has been far too long since the last ipvsadm release. Even though only two changes to the ipvsadm tool have happened since the last release, a release must be made, as these features relate to kernel-side features.

Support for reading 64-bit stats has been available since kernel v4.1. The new attributes for the sync daemon were introduced in kernel v4.3, but got fixed in kernel v4.7.

2018/01/07: The article Day 11 - Turning off the Pacemaker: Load Balancing Across Layer 3 offers up a python program for link testing and using BFD (Bidirectional Forwarding Detection) for advertising and withdrawing routes. The example script uses a separate BFD daemon. But as I use OVS, I think that I will give BFD in Open vSwitch a try via the event/monitoring interface. [This isn't really part of the load balancing tooling above, but is an interesting side project related to it.]
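A hedged sketch of what that experiment might look like with ovs-vsctl; the interface name is an assumption, and intervals are in milliseconds.

```shell
# Enable BFD on an OVS interface.
sudo ovs-vsctl set interface p0 bfd:enable=true bfd:min_rx=300 bfd:min_tx=300

# Poll the session state...
sudo ovs-vsctl get interface p0 bfd_status

# ...or watch for changes through the OVSDB monitoring interface.
sudo ovsdb-client monitor Open_vSwitch Interface name,bfd_status
```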

2018/01/08: Someone suggested dnsdist for load balancing. It has an impressive rule set for customising how query responses are generated.

2018/02/01: In an earlier update, I mentioned BFD for updating routes. When only monitoring link state, something like ifplugd could be used. ifplugd is an Ethernet link-state monitoring daemon that can execute user-specified scripts to configure an Ethernet device when a cable is plugged in, or automatically un-configure it when a cable is removed.
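A minimal sketch of such a user-specified script: ifplugd invokes its action script with the interface name and "up"/"down" (on Debian the hook lives at /etc/ifplugd/ifplugd.action); the route announce/withdraw actions here are placeholders for whatever the route-injection tooling above would do.

```shell
cat > ifplugd.action <<'EOF'
#!/bin/sh
# $1 = interface, $2 = "up" or "down", as passed by ifplugd.
IFACE="$1"; STATUS="$2"
case "$STATUS" in
    up)   echo "link up on $IFACE: announce routes" ;;
    down) echo "link down on $IFACE: withdraw routes" ;;
esac
EOF
chmod +x ifplugd.action
```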

2018/10/09 - Pound - is a reverse proxy, load balancer and HTTPS front-end for Web server(s). Pound was developed to enable distributing the load among several Web-servers and to allow for a convenient SSL wrapper for those Web servers that do not offer it natively.

LoRaWAN simply explained -- LoRaWAN stands for Long Range Wide Area Network. It's a standard for wireless communication that allows IoT devices to communicate over large distances with minimal battery usage.

2018/09/30 - Scanning a Local Network For IPv4 Addresses

Sometimes you just don't know what is on your network. A couple of ways of finding out include using nmap or arp-scan.

An easy install:

sudo apt install arp-scan nmap

And an easy use with some basic defaults (the Unknown ones are LXC containers with manual mac addresses, and Cadmus Computer Systems is a VirtualBox device):
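For reference, the invocations behind that; the subnet below is an example, so adjust to your network.

```shell
sudo arp-scan --localnet        # ARP-sweep the locally attached subnet
sudo nmap -sn 192.168.1.0/24    # nmap "ping scan": host discovery, no port scan
```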

ElioPay - By connecting to Stripe, PayPal and PayBear you can accept credit and debit cards in 135+ currencies, PayPal and 7 cryptocurrencies including Bitcoin, Litecoin and Ethereum (with more to come). [We don't actually process the payments as we connect to Stripe, PayPal and Paybear]

Remember that if your enterprise network has apps that use literals, or that don't support IPv6, you still need dual-stack in the LANs, but IPv6-only access is just fine.

Other comments:

It's not a difficult concept, but you need to remember NAT44 breaks stuff and NAT64/NAT46 breaks more stuff.

Dual stacking is SIMPLE. REALLY. Turn on IPv6 with the M bit set and configure the DHCPv6 server. If you don't need that level of control over address assignments, leave the M bit off. 99.99% of your machines will just add a second address to the interface without you having to do anything more.
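As a hedged illustration of the "M bit" being referred to: a router-advertisement daemon such as radvd sets it per interface. The interface name and prefix below are examples only.

```shell
# AdvManagedFlag on = the M (Managed) bit: hosts should get addresses
# from DHCPv6; AdvAutonomous off suppresses SLAAC for this prefix.
cat > radvd.conf <<'EOF'
interface eth0 {
    AdvSendAdvert on;
    AdvManagedFlag on;
    prefix 2001:db8:1::/64 {
        AdvOnLink on;
        AdvAutonomous off;
    };
};
EOF
```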

Getting to IPv6 resources from an IPv4 address is a *much* harder problem than getting to IPv4 resources from IPv6, which is what you are describing here with the "no renumbering, as everything already has an IPv4 address" requirement. NAT64 allows IPv6 devices to get to legacy IPv4 servers. To allow IPv4 devices to get to IPv6 servers, you need to map the IPv6 addresses you want to talk to into a pool of IPv4 addresses and push that mapping to a NAT46 (not NAT64) device.
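One concrete piece of the easier NAT64 direction: RFC 6052 address synthesis embeds the IPv4 server address in a /96 prefix, commonly the well-known 64:ff9b::/96. A quick sketch with an example address:

```shell
# Embed 192.0.2.1 in the well-known NAT64 prefix 64:ff9b::/96:
# each octet becomes a hex byte in the last 32 bits of the IPv6 address.
ipv4="192.0.2.1"
set -- $(echo "$ipv4" | tr '.' ' ')
printf '64:ff9b::%02x%02x:%02x%02x\n' "$1" "$2" "$3" "$4"
# prints 64:ff9b::c000:0201
```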

Go dual stack; then, once IPv6 is stable, turn off IPv4 if you want to be single stacked. You are then no longer dependent on the services you want to access continuing to be offered over IPv4. 464XLAT will only work as a stop gap for IPv4 clients while services are offered over IPv4. After ~20 years of IPv6 being available (Windows XP had IPv6 support, and it was not the first major OS to have it), just turn on IPv6.

I recommend that eyeball networks don't use any external recursive server, for optimal CDN performance. Yes, some CDNs support other methods, but not all. If not all do, then the requirement remains. See On Firefox Moving DNS to a Third Party.

Upcoming changes to the Linux kernel may add flow_rule infrastructure capabilities to TC's cls_flower dissector. Some of the flow matching performed by Open vSwitch could be / will be offloaded to the kernel. This capability then allows the flow matching to be off-loaded to hardware for specific match/action pairs. I'm not sure how this would integrate into OpenFlow's complicated sequential table evaluations. They are using the bcm_sf2 driver for initial hardware acceleration testing.
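For a flavour of the rule shape involved, a hedged cls_flower example; the interface names and addresses are illustrative, and appending skip_sw to the match would request hardware-only offload where the driver supports it.

```shell
sudo tc qdisc add dev eth0 clsact

# Match TCP traffic to a host/port and redirect it out another interface;
# this is the kind of match/action pair the flow_rule work targets.
sudo tc filter add dev eth0 ingress protocol ip flower \
    dst_ip 192.0.2.10 ip_proto tcp dst_port 80 \
    action mirred egress redirect dev eth1

sudo tc filter show dev eth0 ingress
```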

agorakit is a web based open source groupware for citizens' initiatives.
By creating collaborative groups, people can discuss, organize events, store files and keep everyone updated when needed.
Agorakit is a forum, agenda, file manager, mapping tool and email notifier.

diaspora: instead of everyone’s data being held on huge central servers owned by a large organization, diaspora* exists on independently run servers (“pods”) all over the world. You choose which pod to register with, and you can then connect seamlessly with the diaspora* community worldwide.

matrix: An open network for secure, decentralized communication. Synapse is the reference server implementation.

2018/09/25 ManyVerse: a social network mobile app with the features you would expect: posts, threads, likes, profiles, etc. But it's not running in a cloud owned by a company; instead, your friends' posts and all your social data live entirely on your phone. Remarks at Hacker News.