Optical transmission has always played an important role in high-bandwidth delivery, and even more so with today's drastically increasing bandwidth requirements. From a technology point of view, the bandwidth increase of optical transmission equipment always happens in steps (1G to 10G to 100G), and the evolution of these steps and the way the technology is used always follows the same pattern.
This talk will examine why 100G optical transmission is entering a new phase within this pattern.

In recent years we have seen several new IXes pop up in Denmark. In this talk, I'd like to present a current overview, as well as a short guide for new peering networks on how to select between them.

Speaker:
Lasse Jarlskov
(DKNOG)

10:45 → 11:15  Coffee break (30m)

11:15 → 12:00  Thoughts on an Open Exchange (45m)

Exchanging routing information based on AS numbers is quite a commodity. But can the physical presence of many ISPs and service organisations be used in other ways to facilitate cooperation?

And which facilities are attractive when considering participation at an exchange point?

This session will try to start a discussion about the options, pitfalls and interest in such an extended setup.

Speaker:
Jan Ferré
(DeiC)

12:00 → 13:00  Lunch (1h)

13:00 → 13:45  Where The Truth Lies (45m)

A presentation about how a single source of truth, expressed in an elegant data model, is used to operate an Internet business's process and network automation.

Many automation presentations to date have considered the programming techniques, skills and languages a network engineer embarking upon an automation project will need. Or they have concentrated on a vendor's automation features, so that the audience can see the Arista or the Juniper integration options. Little has been produced to date that explains how an engineer should integrate automation software with the relevant business processes or product design.

If an IXP (but equally an ISP, a hosting company, etc.) concentrates only on the automation platform facing its network infrastructure, then the instruction set used to manage the network is automated, but without integration into the company's products or the customers' requirements, can the company really be said to be automated?

When Asteroid embarked upon building a platform that could build and operate fully autonomous peering platforms, it became clear that the automation systems we build must have deep integration not only with the network switches and the servers that support the platform, but also with the business processes used to create and operate exchanges and port services.
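The "single source of truth" idea can be sketched in miniature: one data model drives both the network-facing configuration and the business-facing records, so the two can never drift apart. All names and the shape of the data model below are hypothetical, invented for illustration; this is not Asteroid's actual system.

```python
# Hypothetical single source of truth for an exchange: one record per member.
SOT = {
    "members": [
        {"name": "ExampleNet", "asn": 64500, "port": "Ethernet1", "speed_gbps": 10},
        {"name": "DemoISP",    "asn": 64501, "port": "Ethernet2", "speed_gbps": 100},
    ]
}

def render_switch_config(sot):
    """Derive switch interface configuration from the same model."""
    lines = []
    for m in sot["members"]:
        lines.append(f"interface {m['port']}")
        lines.append(f"  description {m['name']} (AS{m['asn']})")
        lines.append(f"  speed {m['speed_gbps']}g")
    return "\n".join(lines)

def render_invoice_lines(sot):
    """Derive billing/product records from the identical model."""
    return [f"{m['name']}: {m['speed_gbps']}G port" for m in sot["members"]]

print(render_switch_config(SOT))
print(render_invoice_lines(SOT))
```

Because both renderers read the same structure, a change to a member's port or speed propagates to the network and the business side in one step.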

When a company extends the scope of the automation project into the product set, the sales process and monitoring, a number of efficiencies are realized:

In Sweden there will be a project running all through 2018 that will act as a pathfinder to figure out how to improve digital content delivery when the pressure on systems is at its absolute peak. Several incidents during 2016 and 2017 show that we have a lot of flaws.

The Swedish governmental information channels we have been appointed to use in a crisis do not have the resilience and robustness that is needed when we actually need them.

SUNET and NETNOD have together started a project and received funding and a mandate from the Swedish Post and Telecommunication Authority to solve this. With a mix of good hardware, clever routing, cooperation between ISPs and a flexible, secure CDN platform, we have a way forward.

This talk will be the first time this is presented outside closed doors, and we will talk about exactly how we aim to achieve this.

Even with the best routing design and an ECMP-based topology, traffic and paths do not always reflect the actual needs. Demands for adaptive and flexible path selection usually arrive not long after any shiny new implementation. In the early days, much hope was placed in RSVP and its state machine, in routing additions to the IPv6 header, but most of all in MPLS. This session focuses on experiences from working with innovations like Segment Routing, BGP Labeled Unicast and other ways to achieve path changes based on statistical inputs from both the network and the TCP/IP stack.
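The statistics-driven path selection the abstract alludes to can be illustrated with a toy sketch: given latency samples per candidate path (for example, per Segment Routing policy), pick the path with the best high-percentile latency rather than the best mean, so that a path with occasional spikes is penalised. Path names and measurements below are invented for the example.

```python
# Hypothetical latency samples (ms) per candidate SR/label-switched path.
samples = {
    "sr-path-a": [12.1, 12.4, 30.0, 12.2],   # lower mean, occasional spike
    "sr-path-b": [14.0, 14.1, 13.9, 14.2],   # steady
}

def pick_path(samples, percentile=0.95):
    """Choose the path with the lowest high-percentile latency."""
    def high_pct(xs):
        xs = sorted(xs)
        return xs[min(len(xs) - 1, int(percentile * len(xs)))]
    return min(samples, key=lambda p: high_pct(samples[p]))

print(pick_path(samples))  # the steady path wins despite its higher mean
```

A real deployment would of course feed such a decision into the control plane (for example, by steering traffic onto a different SR policy) rather than just printing the winner.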

I'm a maintainer of the open source project Oxidized, which is software for backing up network configurations. I'd like to give a lightning talk introducing Oxidized, covering what it is and what it does, plus a call for maintainers to help with the project.

During a switch replacement we lost IPv6 access on some VLANs. The investigation provided some interesting insights into Arista VARP, as well as some fall-out afterwards. This talk will briefly present Arista VARP, explain our specific problem (configuring an IPv6 virtual router on only /some/ routers) and the fun interplay between other devices and the rest of the network.

In conclusion, we will discuss how to ensure the future operation of this configuration and this part of the network.

Geolocating public nodes in the Internet infrastructure is hard. No single approach has been proven successful, nor is any dataset (publicly) available that is accurate enough to be considered reliable.

The RIPE NCC has now launched a new tool that targets this problem. The tool consists of a web interface and an API. It tries to infer the location by combining multiple automated methods and arriving at an overall preferred geolocation for an IP address. One such method uses the latency of ping measurements from RIPE Atlas probes to estimate the actual distance of the IP address from those probes.
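The latency method rests on a simple physical bound: light in fibre travels at roughly two thirds of its vacuum speed, about 200,000 km/s, so a ping round-trip time puts a hard upper limit on the distance between probe and target. A back-of-the-envelope sketch (numbers illustrative, not the tool's actual implementation):

```python
# Light in fibre covers roughly 200,000 km/s, i.e. ~200 km per millisecond,
# so an RTT of t ms bounds the one-way distance at about 100*t km.
SPEED_IN_FIBRE_KM_PER_MS = 200_000 / 1000  # ~200 km/ms

def max_distance_km(rtt_ms):
    """Upper bound on probe-to-target distance implied by a ping RTT."""
    one_way_ms = rtt_ms / 2
    return one_way_ms * SPEED_IN_FIBRE_KM_PER_MS

# A 10 ms RTT means the target is at most ~1000 km from the probe.
print(max_distance_km(10))  # 1000.0
```

Intersecting such distance bounds from many probes narrows down the region in which the IP address can plausibly sit.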

OpenIPmap presents the overall results of these engines and asks for input from the user. Thus a crowdsourced dataset arises that combines the power of both computational and human analysis.