Author Topic: tool to query hostnames & devicenames (Read 983 times)

So we have a lot of devices and several hosts distributed across two floors of a building (our new "temporary campus lab"), with IP over both fiber channel and copper wires, a lot of IPs and MACs, and it's really a mess when you have to figure out what is behind an IP.

Besides, guys usually behave as if this were the CCC (computer chaos campus): they come, sit here or there, plug/unplug cables, and sometimes change the hostname or the IP of a host (portable workstation or laptop), so the host table needs to be updated; otherwise you are really lost.

it *might* report a hostname ... if any service nmap finds shows one in its banner, then it'll get it from that. It also guesses the OS based on responses to its probes. So with nmap you're not always guaranteed to get a hostname.

But you can get all the open ports!
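For reference, a single nmap invocation (addresses illustrative) does the port scan, banner grabbing, and OS guessing mentioned above:

```
nmap -sV -O 192.168.1.0/24
```

(`-sV` probes services for version/banner info, `-O` enables OS detection; OS detection requires root privileges.)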

At the end of the day, a dude *suggested* the use of SNMP!

SNMP is ubiquitous and can be used to monitor many aspects of a host. Nearly every networked device supports it, including switches, routers, and access points, not just computers.
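For example, using the net-snmp command-line tools (assuming SNMPv1 and the default "public" community string; the address is illustrative), you can ask a device for its configured name via the standard `sysName` object:

```
snmpget -v1 -c public 192.168.1.1 1.3.6.1.2.1.1.5.0
```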

I really wonder what kind of software and solutions are used at events like VCF East, which must have thousands of IPs connected.

The networks I've set up use switch port and MAC based VLANs to divide the network into zones, with DHCP servers logging MAC addresses and IP address assignments. You can write helper scripts to map MAC addresses to machines (noting that you can spoof a MAC address as easily as an IP address), even at specific times in the recent past (for forensics; log size limits do apply). That way, on a server, you can run a command that lists connected machines and their IP addresses, right now or at any time covered by the dedicated logs; it also lists any unknown devices that connected (and were given access to the "guest" VLAN), and any suspicious devices (based on e.g. network traffic).
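A minimal sketch of such a helper script, assuming dnsmasq and its default lease-file format (expiry epoch, MAC, IP, hostname, client-id per line); the sample data is made up, and a real script would read the actual lease file:

```python
# Map MAC addresses to (IP, hostname) from dnsmasq lease-file contents.
# Default lease-file line format:
#   <expiry-epoch> <MAC> <IP> <hostname> <client-id>

def parse_leases(text):
    """Return {mac: (ip, hostname)} parsed from dnsmasq lease data."""
    table = {}
    for line in text.splitlines():
        fields = line.split()
        if len(fields) >= 4:
            expiry, mac, ip, hostname = fields[:4]
            table[mac.lower()] = (ip, hostname)
    return table

# Illustrative sample; a real script would read /var/lib/misc/dnsmasq.leases.
sample = ("1623841200 AA:BB:CC:DD:EE:FF 192.168.1.10 labpc01 *\n"
          "1623844800 00:11:22:33:44:55 192.168.1.11 scope-pc *\n")
print(parse_leases(sample)["aa:bb:cc:dd:ee:ff"])  # ('192.168.1.10', 'labpc01')
```

The same idea extends to forensics: keep rotated copies of the lease file (or the DHCP server's log) and you can answer "which machine had this IP last Tuesday" with the same parser.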

Essentially, when a specific MAC address is connected to a specific port on a specific switch, you allow it a specific IP address. It does not mean you trust it to be a specific machine; it is just the working assumption for all non-privileged services.

As I use Linux for this, I typically use what the other admins are most familiar with, but I do like dnsmasq. The rest of the stuff is homebrew, because it depends so much on the particular lab needs.

(As an example, I have an Apache server configuration and administration scheme that has been in active use for fifteen years, optimized for multiple administrators who tend to change, and for both common and separate projects, all separately managed. The last time I bothered to update my public documentation of it was in 2007, though; the last updates I made were for Apache 2.4, about four years back. It too has a large human component, mainly to force the server admin to think about what they wish to enable, rather than just installing a package with the default configuration from public repos. It involves relocating the configuration files as well.)

In a typical lab, when a new machine is added, its MAC is recorded along with machine info. Connecting to any standard LAN socket always gives it the same IP address and name (via DHCP). If a machine needs a fixed IP address, that address is explicitly specified in the DHCP configuration, so it does not matter whether the machine is configured to use DHCP or a fixed IP address, because the IP always ends up the same anyway.
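With dnsmasq, for instance, such a pinned assignment is a one-line `dhcp-host` entry per machine (the MAC, IP, and name below are made up):

```
# /etc/dnsmasq.conf: this MAC always gets the same IP and name
dhcp-host=aa:bb:cc:dd:ee:ff,192.168.1.10,labpc01
```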

Dedicated machines, like experiments (say, embedded machines and such), are connected to dedicated ports on a switch, and traffic only flows via that link if the machine has a specific MAC. The machines themselves are in locked cabinets, so unauthorized persons like visiting students cannot tamper with them.

The most important security feature is trust. When someone is allowed access, the person is asked to thoroughly read and agree to the security policies, but that's just the bureaucratic part. The human part is more important. That involves explaining that things work this easily because we trust each other to uphold the rules, so that more draconian rules and technical limitations are not needed. That these are the tools we must take care of, so they can do the work we want them to do. If we are selfish, everyone else will become selfish too, stricter rules will have to be implemented, and all work will be that much harder to do.

This works. If you've ever been a sysadmin yourself, you'll notice how most will automatically look away when others are typing in their passwords, even when they have the technical ability to change those passwords to anything they want, and back again, or to perform any action as that specific user. That's the human part I'm talking about. Do not trust people who do not develop such mannerisms; they do not operate under the sort of implicit, human-trust-based rules I have described above.

The command-line utilities, for the homebrew scripts. I've only managed lab-internal switches, and whatever I or my friends or family have at home.

(Way back in 1999, I had a Linux server acting as a NAT, DHCP, and HTTP proxy, with another serving as a file and print server for three dozen or so Macs and Windows machines; Mars NWE, Netatalk, and Samba on the same volumes. File locking didn't always work right, but then most applications didn't even try to use it. Even later on, you couldn't have two different users editing different parts of the same site stored on a shared volume in FrontPage or Dreamweaver. Silly applications...)

The last network setup for a lab environment I did was way back in 2004; since then, I've worked mostly in HPC cluster setups and such.

My main annoyance right now is that there are no affordable home wireless routers with both 2.4 and 5 GHz support in OpenWRT/Lede; I've an Asus RT-AC51U, and the latest kernels do have the drivers needed, but OpenWRT is lagging. I have some home automation stuff, like an Odroid HC1 mini file server, I'd like to configure and play with. So I am somewhat up to date on the subject in theory (as to how to set up a local network with embedded machines, trusted users, and untrusted users, I mean); the major holes in my knowledge are which existing software projects one could use and adapt for this kind of stuff.

(I do have lots of anecdotal stuff from colleagues still doing this sort of thing, especially wrt. remote maintenance and so on. For example, two or three years ago I created an internal Debian package that allows Uni users to register themselves as local users on a laptop by logging in via Eduroam (provided by the University via WiFi), verifying and storing the details (on both the user and the laptop) on a server, then creating the local user account with info matching their AD/LDAP records, and removing the package. It was used to register maybe a hundred laptops, until a more robust solution was built using the Uni-maintained Linux distribution, Cubbli, in the last year or two. I like the Linux-IT team at Uni of H here, and occasionally help out just to keep my edge.)

Do note that just because I said that trusting humans is the first, most important step, does not mean it ends there. I like to go all-out paranoid, not trusting even myself. (Then again, right now I have 89 IP addresses banned by my fail2ban filters on my laptop, because they have attempted to access the machine's various services during the last 24 hours or so, so I'd say that is just a healthy approach.) It seems that when I describe the paranoid cases couched in personal experiences and anecdotes about saving my butt, they are much more palatable to others. Not always, and not accepted nearly as often as I'd prefer, but it seems like a good approach in general. Trust, but verify.

On servers, I would have liked to implement a transparent configuration file tracker, which uses the process tree to track individual administrators (even across sudo su -, which I heavily discourage for forensics reasons), and records the changes on a separate machine inaccessible to most admins. (I am highly irritated by intelligent people making stupid errors -- I cannot help but expect more of them, even though I have no issues dealing with people with less skill -- and I would have liked to use that for one-on-one discussions about silly, lazy mistakes, carelessness, and attempts to hide the trail to escape personal blame among colleagues. Bureaucrats do not need to know, unless users' information is misused. Just QC and making admins behave sensibly, really.)

If you want to talk about actual solutions, maybe need help with some maintenance/logging/utility scripts, I'm always happy to help make Linux systems more robust; feel free to email me for example. (My email should be listed in my profile, but if not, you can see it here.)

Umm, this is required to be implemented also inside embedded devices running RTOSes, but it's not a problem, since SNMP can be configured to use UDP/IP, and version 1 of the protocol is enough for our needs.

Quote

this is required to be implemented also inside embedded devices running RTOSes, but it's not a problem

Hah hah. SNMP is pretty piggy. ASN.1 is obnoxious, and the supposedly-required MIBs (the data that SNMP reads) are large and annoyingly organized (with ordering of things that don't need to be ordered). At least, this was the case last time I paid any attention, which was quite a long time ago, and perhaps based on standards of computational power that are long obsolete.

If I were to implement an identifying service, I'd write a tiny little daemon in C one could use on both workstations and embedded devices.

At the core, I'd probably use Ed25519, with two public-private key pairs for each client. The server signs each response with its half of that particular client's first key pair (with the plaintext including both sender and receiver IP addresses, and the plaintext limited to max. 384 bytes, to allow up to 128 bytes (1024 bits) for the signature while keeping the packet payload under 512 bytes; an Ed25519 signature itself is 64 bytes), and the client signs each request with its half of the second key pair, the other half of which the server already knows. I'd keep the data itself unencrypted; it makes traffic analysis and forensics easier, and the information transferred is intended to be public, but hard to fake.
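A quick sketch of that payload budget (the 384/128 split is from the description above; the exact field contents and the trailing-signature layout are my own assumptions):

```python
# Hypothetical wire layout for the identify daemon's UDP payload:
# the signed plaintext (which itself includes sender and receiver IPs)
# is capped at 384 bytes, followed by a fixed 128-byte signature slot
# (an Ed25519 signature is 64 bytes, so the slot leaves headroom).
MAX_PLAINTEXT = 384
SIG_SLOT = 128

def build_packet(plaintext, signature):
    assert len(plaintext) <= MAX_PLAINTEXT
    assert len(signature) <= SIG_SLOT
    # Zero-pad the signature so the receiver can always peel off
    # the last 128 bytes.
    return plaintext + signature.ljust(SIG_SLOT, b"\0")

msg = b"src=192.168.1.10 dst=192.168.1.1 hostname=labpc01"
pkt = build_packet(msg, b"\x00" * 64)  # placeholder 64-byte signature
assert len(pkt) <= 512  # stays within the 512-byte payload target
```

Even at the maximum plaintext size, the payload is exactly 384 + 128 = 512 bytes, which comfortably fits a single UDP datagram on any sane MTU.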

Because one part of each key is on the client, and the other part on the server, with no machine having both parts, it is easier to keep the whole secure.

(I don't like the idea of using the same key pair for messages flowing both ways, with the server using the A part of the key and the client using the B part. I'm no cryptologist, but when that potential risk can be avoided simply by using a separate key pair per direction, I'd rather do that.)

For RTOS devices, I'd probably have a small embedded local Linux machine as a firewall/gateway to its own LAN, in a secured cabinet like I said earlier. That way, you can do monitoring and control via that Linux machine, and do access control to the RTOS devices via e.g. SSH tunneling (so you only get access, if you have SSH access to that frontend Linux machine).

In particular, if you want real-time monitoring and control, you can then use USB-connected microcontrollers rather than full-blown RTOS devices. That's what I am doing in one experiment, that requires some temperature and pressure sensors, actually. The idea is that you design the control loop to be contained in the microcontroller, so that any delay in the USB communications is irrelevant; and you only pass status/state information and new control parameters via USB.
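The host-side framing for that status/parameter traffic can be very small. A sketch, with a made-up fixed frame of two float32 setpoints plus a one-byte checksum (something like pyserial would carry the bytes over the USB serial link):

```python
import struct

# Hypothetical control frame sent to the microcontroller over USB serial:
# two little-endian float32 setpoints (temperature, pressure) plus a
# one-byte modular checksum. The control loop itself runs entirely on
# the microcontroller; this only delivers new parameters.
def pack_setpoints(temp_c, pressure_kpa):
    body = struct.pack("<ff", temp_c, pressure_kpa)
    checksum = sum(body) % 256
    return body + bytes([checksum])

def unpack_setpoints(frame):
    body, checksum = frame[:-1], frame[-1]
    assert sum(body) % 256 == checksum, "corrupted frame"
    return struct.unpack("<ff", body)

frame = pack_setpoints(21.5, 101.3)   # 9 bytes on the wire
temp, pressure = unpack_setpoints(frame)
```

The microcontroller side mirrors the same layout in C; since the frame is fixed-size, parsing is a single read plus a checksum compare, with no delay-sensitive logic on the USB path.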

Quote

If I were to implement an identifying service, I'd write a tiny little daemon in C one could use on both workstations and embedded devices.

We already have one, and it's in working condition, but since it was *suggested* that we not be NIH, I am looking around for already-implemented solutions.

Quote

NIH(urban dictionary):

NIH stands for Not Invented Here. Open Source/Free Software developers are suffering from a severe form of NIH. Everybody thinks other people's solutions suck, and their solution is the solution to end all solutions.

The funniest sentence: "RTOS devices are leftpad, which is the ultimate antithesis to NIH" (quote)

Quote

Everybody thinks other people's solutions suck, and their solution is the solution to end all solutions.

No, for example Apache and Nginx are perfectly good solutions, as is dnsmasq, if configured properly. That should be proof enough that I do not suffer from NIH as you seem to imply.

I am only saying that if I were you, I'd approach the problem from a different direction. The first level of security would be human; the second level would be network configuration based on VLANs and MAC addresses; and the third level, if deemed appropriate, like you seem to have deemed it, would be a small service using elliptic curve cryptography (which I'd publish under the GPL and push to Debian if accepted), because no such thing exists yet, but it might be useful in lab-type environments requiring lightweight network device authentication. The fourth level, involving remote access by specific admins to specific devices, would be via key-based SSH, with the machines holding the private keys severely restricted, under lock and key, and not on, say, sysadmins' workstations. If admins need access, they can log in to the secure machine and do the remoting from there. That leaves a traceable log, too.

The transparent configuration file trackers do not exist yet on Linux either, btw. When I researched the approach, fanotify was not available yet, so the only alternative approaches were file leases and the audit subsystem. Just because I am an anonymous voice on a message board does not mean I do not know exactly what I'm talking about.

I do recommend Ed25519 for digital signatures. To implement it, you need SHA-512 and the Curve25519 elliptic curve stuff. I haven't implemented it myself, but that's what I'd go with for anything asymmetric. DJB's reference Curve25519 implementation is an excellent place to start.

I might have missed the point spectacularly, but what's wrong with nmap?

Quote

it *might* report a hostname ... if any service nmap finds shows one in its banner, then it'll get it from that. It also guesses the OS based on responses to its probes. So with nmap you're not always guaranteed to get a hostname.

But you can get all the open ports!

Our initial need was mapping the IP to the hostname of each remote device on a sub-LAN.

Indeed, only whenever it's available: e.g. Cisco routers implement SNMP so they can reply, but other devices *might* not have anything implemented. One of our points was exactly this. It's not a commonly implemented service on Linux, nor on what you find on embedded devices.

Besides, nmap looks 10X slower than our tool at mapping a sub-LAN. The reason is simple: our tool is focused on a single purpose, with a small protocol that transports only the information we need, so for this it's also better than SNMP.

Anyway, we have to rethink the foundations of this idea with regard to security and organization.