OctoPrint is a great web frontend for 3D printers. OctoPi is a Raspbian-based image for the Raspberry Pi that comes with everything you need set up and configured.

OctoPrint is an extremely convenient way to manage your 3D printer. However, it's capable of a lot of spooky things:

- Provides access to webcams showing prints, if you have them
- Sets temperatures of both the tool and the heatbed
- Starts whatever print you feel like
- Controls steppers

In the best case, OctoPrint gives whoever can access it the ability to see into your house and what's going on with your printer. In the worst case, someone with malicious intent could burn down your house, or at least wreck your printer.

The smartest approach here is probably to put OctoPrint on a trusted network and refrain from poking holes in your router to allow access from the Internet.

But I’m not that smart.

In this post I'm going to outline a couple of things I did that make me feel better about exposing my OctoPrint instance to the Internet.

Execute the above command on boot. I accomplished this by putting it in /etc/rc.local.

Now OctoPrint should be available on the nginx server via port 25000. Same deal for the webcam feed on 28080 (I have another webcam accessible via 28081).

Note that these should be bound to loopback because of the way the tunnel is set up. No point in all of this noise if that’s not the case.
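The tunnel command itself didn't survive into this post, but a reverse SSH tunnel along these lines would produce the setup described. The host name and user are placeholders; the source ports are assumptions based on the usual defaults (OctoPrint listens on 5000, mjpg-streamer on 8080), and the remote ports match the ones above:

```shell
# Hypothetical reverse tunnel from the Pi to the nginx server.
# Binding the remote side to 127.0.0.1 keeps the forwarded ports
# loopback-only on the nginx host.
ssh -N \
  -R 127.0.0.1:25000:localhost:5000 \
  -R 127.0.0.1:28080:localhost:8080 \
  tunnel@nginx-server.example.com
```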

Make ’em accessible

Now we can go about this as if it were a standard reverse proxy setup. The backends are accessible over loopback on ports local to the nginx server.

You can set up authentication however you like. It's probably easiest and safe enough to use TLS, HTTP basic auth, and something like fail2ban.
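If you go the basic auth route, a minimal nginx sketch looks something like the following (the htpasswd file path and realm name are my own placeholders):

```nginx
# Hypothetical basic-auth location block. Generate the password file with:
#   htpasswd -c /etc/nginx/.htpasswd someuser
location / {
    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;
    proxy_pass http://octopi_backend;
}
```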

I like client certificates, and already had them set up for other stuff I run, so I’m using those.

This is my config:


```nginx
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

upstream octopi_camera1 {
    server 127.0.0.1:28080;
}

upstream octopi_camera2 {
    server 127.0.0.1:28081;
}

upstream octopi_backend {
    server 127.0.0.1:25000;
}

server {
    listen 80;
    listen 81;

    server_name octopi.mydomain.com;

    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl; # managed by Certbot

    server_name octopi.mydomain.com;

    error_log /var/log/nginx/octopi.mydomain.com/error.log info;
    access_log /var/log/nginx/octopi.mydomain.com/access.log;

    # .... bunch of SSL jazz auto-generated by certbot .....

    proxy_buffering off;
    proxy_redirect http:// https://;

    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_set_header Host $host;

    # I found this necessary in order to be able to upload large-ish gcode
    # files.
    client_max_body_size 1G;

    location /webcam/ {
        proxy_pass http://octopi_camera1/;
        access_by_lua_file /etc/nginx/scripts/sso.lua;
    }

    location /camera2/ {
        proxy_pass http://octopi_camera2/;
        access_by_lua_file /etc/nginx/scripts/sso.lua;
    }

    location / {
        proxy_pass http://octopi_backend;
        access_by_lua_file /etc/nginx/scripts/sso.lua;
    }
}
```

What’s this access_by_lua hocus pocus?

I covered this in a previous post. The problem is that modern web applications don't really play nicely with client certificates, and this seems to include OctoPrint. Web sockets and service workers do a bunch of wizardry, and they don't always send the client cert when they're supposed to.

The basic idea behind the solution is to instead authenticate by a couple of cookies with an HMAC. When these cookies aren’t present, nginx redirects to a domain that requires the client certificate. If the certificate is valid, it generates and drops the appropriate cookies, and the client is redirected to the original URL.

See the aforementioned post for more details.

Goes without saying, but…

The Raspberry Pi itself should be secured as well. Change the default password for the pi user.

In a previous post, I detailed a trick to get complicated webapps working with client certificates.

The problem this solves is that some combination of web sockets, service workers (and perhaps some demonic magic) doesn't play nicely with client certificates. Under some circumstances, the client certificate is just not sent.

The basic idea behind the solution is to instead authenticate by a couple of cookies with an HMAC. When these cookies aren’t present, you’re required to specify a client certificate. When a valid client certificate is presented, HMAC cookies are generated and dropped. If the cookies are present, you’re allowed access, even if you don’t have a client certificate.

This has worked well for me, but I still occasionally ran into issues. Basically every time I started a new session with something requiring client certs, I’d get some sort of bizarre access error. I dug in a little, and it seemed like the request to fetch the service worker code was failing because the browser wasn’t sending client certificates.

This led me to double down on the HMAC cookies.

Coming clean

When I call this Single Sign On, please understand that I really only have the vaguest possible understanding of what that means. If there are standards or something that are implied by this term, I’m not following them.

What I mean is that I have a centralized lua script that I can include in arbitrary nginx server configs, and it handles auth in the same way for all of them.

The nitty gritty

Rather than using HMAC cookies as a fallback auth mechanism and having "ssl_verify_client" set to "optional," I do the following:

1. If HMAC cookies are not present, nginx redirects to a different subdomain (it's important that it's on the same domain). This server config requires the client certificate.

2. If the certificate is valid, it generates and drops the appropriate cookies, and the client is redirected to the original URL. The cookies are configured to be sent for all subdomains of a given domain.

3. Now that the client has HMAC cookies, it's allowed access. If the cookies were present to begin with, the above is skipped.
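The cookie scheme can be sketched in a few lines of Python. To be clear, the cookie names, token layout, and expiry policy here are my own invention for illustration, not what the actual lua script does:

```python
import hashlib
import hmac
import time

SECRET = b"randomly-generated-secret"  # placeholder; keep this out of web roots

def make_cookies(user: str) -> dict:
    """Issue the pair of auth cookies: a payload and its HMAC."""
    payload = f"{user}|{int(time.time())}"
    mac = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    # Setting Domain=.mydomain.com on these makes browsers send them
    # to every subdomain, which is what lets the SSO redirect work.
    return {"auth_payload": payload, "auth_hmac": mac}

def verify_cookies(cookies: dict, max_age: int = 86400) -> bool:
    """Allow access only if the HMAC matches and the cookie isn't stale."""
    payload = cookies.get("auth_payload", "")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, cookies.get("auth_hmac", "")):
        return False
    try:
        issued = int(payload.split("|")[1])
    except (IndexError, ValueError):
        return False
    return time.time() - issued < max_age

cookies = make_cookies("alice")
print(verify_cookies(cookies))  # valid cookies pass

cookies["auth_payload"] = "mallory|" + cookies["auth_payload"].split("|")[1]
print(verify_cookies(cookies))  # tampered payload fails the HMAC check
```

Since the server only has to verify an HMAC it minted itself, there's no session store to maintain on the backend.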

The setup has a couple of pieces:

- An nginx server for an "SSO" domain. This is the piece responsible for dropping the HMAC cookies.
- A lua script which is included everywhere you want to auth using this mechanism.

This is the SSO server config:


```nginx
server {
    listen 80;

    server_name sso.mydomain.com;

    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl; # managed by Certbot

    server_name sso.mydomain.com;

    error_log /var/log/nginx/sso.mydomain.com/error.log info;
    access_log /var/log/nginx/sso.mydomain.com/access.log;

    # .... bunch of stuff generated by certbot ....

    ssl_client_certificate /etc/ssl/ca/certs/ca.crt;
    ssl_crl /etc/ssl/ca/private/ca.crl;
    ssl_verify_client on;

    location / {
        access_by_lua_file "/etc/nginx/scripts/sso.lua";
    }
}
```

And the SSO lua script:


```lua
---
-- SET THIS TO SOMETHING RANDOMLY GENERATED!
--
-- Make this file only readable by the nginx process, and keep it away from web roots.
```
I recently finished a project which tied together a bunch of different tinkering skills I’ve had a lot of fun learning about over the last couple of years.

The finished product is this:

It shows me:

- The time
- Weather: current conditions, the weekly forecast, readings from an outdoor thermometer, and the temperature in the city I work in
- Probably most usefully, the times that the next BART trains are showing up

Obviously the same thing could be accomplished with a cheap tablet. And probably with way less effort involved. However, I really like the aesthetic of e-paper, and it’s kind of nice to not have yet another glowing rectangle glued to my wall.

I’m going to go into a bit of detail on the build, but keep in mind that this is still pretty rough around the edges. This works well for me, but I would by no means call it a finished product. 🙂

Depending on where you get the components, this will run you between $40 and $50.

Hookup

The e-Paper display module connects to the ESP32 over SPI. Check out this guide to connecting the two.

I chose to connect using headers, sockets, and soldered wires. This makes for a more reliable connection, and it's easier to replace either component if need be. I cut the female jacks from the jumper bus that came with my display and soldered the wires onto header pins. I then applied a whole mess of hot glue to prevent things from moving around.

Firmware

I’m using something I wrote called epaper_templates (clearly I was feeling very inspired when I named it).

The idea here is that you can input a JSON template defined in terms of variables bound to particular regions on the screen. Values for those variables can be plumbed in via a REST API or published to appropriately named MQTT topics.

The variables can either be displayed as text, or be used to dynamically choose a bitmap to display.

My vision for this project is to have an easy-to-use GUI to generate the JSON templates, but right now you have to write them by hand. Here is the template used in the picture above as an example.

Configuration

The only variable that the firmware fills in for you is a timestamp. Everything else must be provided externally.

I found Node-RED to be a fantastic vehicle for this. It’s super easy to pull data from a bunch of different sources, format it appropriately, and shove it into some MQTT topics. The flow looks like this:

Enclosure

I designed a box in Fusion 360. I’m a complete newbie at 3D design, but I was really pleased with how easy it was to create something like this.

The display mounts onto the lid of the box with the provided screws and hex nuts. The lid sticks to the bottom of the box with two tabs. The bottom has a hole for the USB cable and some tabs to hold the prototype board in place.