Release Engineer (Berlin, Germany), Sony Interactive Entertainment

Do you want to be part of an engineering team that is building a world-class cloud platform that scales to millions of users? Are you excited to dive into new projects, enthusiastic about automation, and do you enjoy working in a strongly collaborative culture? If so, join us!

Responsibilities

Design and development of Release Engineering projects and tools to aid the release pipeline.
Work in cross-functional development teams to build and deploy new software systems.
Work with team and project managers to deliver quality software within schedule constraints.

Requirements

Demonstrable knowledge of distributed architectures, OOP and Python
BS or a minimum of 5 years of relevant work experience

SRE / DevOps

Do you want to be part of an engineering team that focuses on building solutions that maximize the use of emerging technologies to transform our business and achieve superior value and scalability? Do you want a career opportunity that combines your skills as an engineer with a passion for video gaming? Are you fascinated by the technologies behind the internet and cloud computing? If so, join us!

As a part of Sony Computer Entertainment, Gaikai is leading the cloud gaming revolution, putting console-quality video games on any device, from TVs to consoles to mobile devices and beyond.

Our SREs focus on three things: overall ownership of production, production code quality, and deployments.

The successful candidate will be self-directed and able to participate in the decision-making process at various levels.

We expect our SREs to have opinions on the state of our service and to provide critical feedback during various phases of the operational lifecycle. We are engaged throughout the software development lifecycle, ensuring the operational readiness and stability of our service.

Requirements

Minimum of 5 years of work experience in a Software Development and/or Linux Systems Administration role.
Strong interpersonal, written and verbal communication skills.
Available to participate in a scheduled on-call rotation.

Skills & Knowledge

Proficient as a Linux Production Systems Engineer, with experience managing large scale Web Services infrastructure.
Development experience in one or more of the following programming languages:

Having worked with Ansible for a couple of years now, and using LXD as my local test environment, I was waiting for a simple way to create LXD containers (locally and remotely) with Ansible from scratch, without resorting to helper methods like shell: lxd etc.

So, since Ansible 2.2 we have native LXD support.
Furthermore, the Ansible team actually showed some respect to the Python3 community and implemented Python3 support.

Create your inventory file

Imagine you want to create 5 new LXD containers. You could write 5 playbooks to do it, or you can be smart and let Ansible do the work for you.
Working with inventory files is easy: an inventory file is simply a file with an INI structure.

Let's create an inventory file for new LXD containers in ~/Projects/git.ansible/lxd-containers/inventory/containers:
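The inventory file itself did not survive in this copy of the post, but based on the group name used later ('containers') and the ansible_connection=lxd setting mentioned further down, a minimal sketch could look like this (the container names are made up):

```ini
; inventory/containers - minimal sketch, names are illustrative
[containers]
container-01
container-02
container-03
container-04
container-05

[containers:vars]
ansible_connection=lxd
```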

delegate_to: ...: this key tells Ansible not to use the default connection anymore, but to delegate the connection and the work to the host named in delegate_to.

raw: ...: this key advises Ansible to use the raw module. Raw means we don't assume anything is running on the target, not even Python, which Ansible normally needs. It just uses an SSH connection (by default) or, in our case, a local LXD connection (like lxc exec <container-name> -- <command>). Here we execute dpkg -s python, because we want to find out whether Python2 is installed.

register: ...: during execution of the raw: ... command, Ansible captures the output (stdout, stderr) and the return code of the command. register: ... defines a "variable" to store this result. Normally this "variable" is a Python/JSON dictionary for a particular host, but as we are iterating through the 'containers' inventory group, the "variable" has a results array (which we will use in the next task), where Ansible stores the output of every host's check. During task execution, though, the "variable" is still usable as a single result set.

failed_when: ...: this fails the task if the registered "variable" is not accessible or the return code is neither 0 nor 1 (i.e. the command returned neither a success nor the expected "not installed" failure, but something else). (You can find more documentation in the Ansible docs.)

changed_when: false: by default, every run of this task would report a changed status, meaning Ansible would report one change on every run. To prevent this, we set it to false. (You can find more documentation in the Ansible docs.)

with_items: ...: this is one of the many Ansible loop statements. In this case, we are looping over the Inventory Group 'containers' (which we defined in the inventory file earlier).

The "{{ item }}" placeholder is filled in by the with_items: ... loop; again, a hint to read Ansible's good documentation about loops.
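Putting these keys together, the Python2 check task could look roughly like this (a sketch, not the author's original playbook; it assumes the inventory group is called 'containers'):

```yaml
- name: check if Python2 is installed in the containers
  delegate_to: "{{ item }}"
  raw: dpkg -s python
  register: python_check_is_installed
  failed_when: python_check_is_installed.rc not in [0, 1]
  changed_when: false
  with_items: "{{ groups['containers'] }}"
```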

Install Python2 if it is not installed in the container

delegate_to: ...: as in the previous task, this key delegates the connection and the work to the host named in delegate_to.

raw: ...: again we use the raw module, because Python may still be missing inside the container, so Ansible's normal modules won't work there yet. This time, instead of just checking, the raw command installs Python2.

when: ...: this is a conditional. The task only executes when the condition is met; in this case when the return code equals 1, which is true when the Python2 install check reported that Python2 was not installed.

with_items: ...: this is one of the many Ansible loop statements. In this case, we are looping over the Inventory Group 'containers' (which we defined in the inventory file earlier).

The "{{ item }}" placeholder is filled in by the with_items: ... loop; again, a hint to read Ansible's good documentation about loops. In this case we loop through the result sets of the Python2 install check, collected in the "variable" python_check_is_installed.
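A sketch of what this install task could look like (again not the original playbook; the exact apt-get command line is an assumption):

```yaml
- name: install Python2 in containers where it is missing
  # item.item is the container name from the previous check loop
  delegate_to: "{{ item.item }}"
  raw: apt-get update && apt-get install -y python
  when: item.rc == 1
  with_items: "{{ python_check_is_installed.results }}"
```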

Some more information

In the playbook, the first task (create LXD containers) used a local connection, which means nothing other than that Ansible does the work on your local workstation.
Inside the inventory INI file there is this key/value pair: ansible_connection=lxd.

For the two other tasks, which are delegated to the created containers, Ansible would normally attempt an SSH connection (if you removed the ansible_connection=lxd setting). With this configuration in the inventory INI file, it won't try SSH towards the containers, but will use the local LXD connection instead.
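For completeness, the playbook head with the local container-creation task might look roughly like this (a sketch; the image source, protocol and alias are assumptions, so check the lxd_container module documentation for the exact arguments):

```yaml
- hosts: localhost
  connection: local
  tasks:
    - name: create the LXD containers
      lxd_container:
        name: "{{ item }}"
        state: started
        source:
          type: image
          mode: pull
          server: https://images.linuxcontainers.org
          protocol: simplestreams
          alias: ubuntu/xenial/amd64
        wait_for_ipv4_addresses: true
      with_items: "{{ groups['containers'] }}"
```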

Thanks for 15.04 and all the other releases before (especially the LTS ones).

I think that during the last 10 years, Ubuntu has made a difference in the Linux community.

When I joined this journey, Ubuntu was just another distribution, with a SABDFL who was pumping a lot of
money into his free project. I guess it was his private money, and the whole Linux community should be
thankful to this geek.

Without Mark's engagement, I don't think Linux on the desktop would be as well known
to the wider public.

Don't get me wrong, we had SuSE, we had Red Hat, we had Debian (and other smaller distros), but most of the
global players of today were famous for their involvement on servers (well, not SuSE, because they were focused on the desktop
before they lost track and made the wrong turn; and no, I am not talking about openSUSE, that is a different story).

10 years ago, actually 10 years and a couple of months, a small group of people was working on an integrated desktop environment
based on GNOME. And they were right to do so. Those people, many of whom are still doing their job at Canonical, were right to
invest their time in that.

And look, where are we today! On the Desktop, on the server, in the middle of the cloud and on a freaking Phone!

Who would have thought of this 10 and a half years ago?

Yeah, I know, there were some decisions which were not so OK for the community, but honestly, even those wrong decisions were
needed. Without wrong decisions we don't learn. Errors are there to learn from, even in a social environment.

To make my point: I think it's important to have one public figure to bring a project like Ubuntu forward. One person who
directs all fame and hate towards himself, and Mark is exactly one of those figures.

Just look at other huge open-source projects, like OpenStack or Hadoop. Great projects, I give them that, but there is no
person who drives them. No person making the decisions about where the project has to go. That's why OpenStack as a stock open-source project
is not a product. Hadoop, with all its fame, is not a product out of the box.

Too many companies have a say. That's why, for example, it's far from practical to install OpenStack from source and end up with a running cloud system.
This is wrong, and those communities need someone who wears the hat and says where the community is moving.

Democracy is good, I know, but in some environments democracy blocks innovation. Too many people, too many voices, too many wrong directions.
Just look at the quality of Ubuntu Desktop pre-installed on Dell workstations or laptops. That's how you do it: you concentrate on quality, and
you get vendors who will ship your PRODUCT!

Let's see:

Nowadays we have Ubuntu as a desktop OS (with Unity as its desktop).

We have Ubuntu as a server OS, running on uncounted bare-metal machines.

We have Ubuntu as a cloud OS, running on many, many Amazon instances, Docker instances and possibly Rackspace instances.

But Ubuntu is more. The foundation of Ubuntu is driving many other Projects, like:

Kubuntu (aka the KDE Distro of Choice)

Ubuntu GNOME Remix

Ubuntu with XFCE, etc.

Mint Linux

Goobuntu

etc.

All those derivatives are based on the Ubuntu Foundation, made and integrated and plumbed by so many smart and awesome people.

Thanks to all of You!

So what now?

Mobile is growing. Mobile first. Mobile is the way to go!

Ubuntu on the Phone is not an idea anymore, it's reality. Well done people. You made it!

But Ubuntu can even do more. Let's think about the next hype.

Hype like CoreOS.

A Linux OS which is image based, with no package management, driven by some small utilities like systemd, fleetd and/or etcd.

CoreOS is one of the projects I am really looking forward to using. But I really want to see Ubuntu there.

And yes, there is Ubuntu Snappy... so why not try to use Snappy as a CoreOS replacement?

There is Docker. Docker is being used as the dev utility for spinning up instances with specialised software on them.

Hell, Stephane Graber and his friends over at the Linux container community have LXD!
LXD is driven by Stephane and his friends, and Stephane works for Canonical. So I say: LXD is a Canonical project!

And what is Canonical? Canonical is a major contributor to Ubuntu. I want to see LXD as the Docker replacement, with more
security, with more energy, and with better integration into cloud systems like OpenStack and/or CloudStack!

To make a long story short, Ubuntu is one of those Projects, which are not going away.

Even with Mark (hopefully not) retiring, Canonical will be the driving force. There will be another Mark, and that's
why Ubuntu is one of the driving forces in open-source development. Forget about contributor licenses, forget about
all the decisions which were made wrongly.

We are here! We don't go away! We are Ubuntu, Linux for Human Beings! And we are here to stay, whatever you say!
We are better, we are stronger, we are The Borg! ^W ^W ^W ^W forget this, this is a different movie ;)

And if you ask: "Dude, you are saying all this, and you were a member of this project, so where is your CONTRIBUTION!?"

My Answer is:

"I bring Ubuntu to the business! I installed Ubuntu as a server OS in many companies over the last couple of years.
I integrated Ubuntu as a support OS in companies where you wouldn't expect it to run and support operations or service reliability departments.
I am the Ubuntu integrator and evangelist you won't (normally) see, hear or read about. I am one of the Ubuntu apostles who are not bragging,
but bringing the light to the darkness."

;-)

PS: Companies like Netviewer AG, Podio (both now belong to Citrix Inc.) and Sony/Gaikai with their PlayStation Now product.

export CXXFLAGS='-std=c++11 -stdlib=libc++ -mmacosx-version-min=10.8'
export LDFLAGS='-lc++'
Now just do this:
`python ./setup.py install`
And wait!
(A word of advice: when you install boost from your OS, make sure you use the Python version that boost was compiled with.)

Good luck ;)

Meaning: if this doesn't work, you'll have to ask Google.

Now, how does it work?

Easy, easy, my friend.

The question is, why should we use JavaScript inside a Python tool?

Well, while doing some crazy stuff with our ElasticSearch cluster, I wrote a small Python script to do some nifty parsing and correlation. After not even 30 minutes I had a command-line tool which read in a YAML file, with ES queries written in YAML format, and an automated way to query more than one ES cluster.

No, this is not really what we are doing :) But I think you get the idea.

Now, in this example we have 3 different ElasticSearch clusters to search in; all three hold different data, but all share the same event format.
So my idea was to generate reports of the requested data, either for a single ES cluster or correlated across all three.
I wanted to have that functionality inside the YAML file, so everybody who writes such a YAML file can also add some processing code.
Well, the result set of an ES search query is a JSON blob, and thanks to elasticsearch.py it will be converted to a Python dictionary.

Well, if you have ever written front-/backend web apps, you know it's pretty difficult to write frontend Python scripts that run inside your browser. So, JavaScript to the rescue.
And everybody knows how easy it is to deal with JSON object structures inside JavaScript. So why don't we use this knowledge and invite users who are not familiar with Python to participate?

Now, think about an idea like this:

title:
  name: "Example YAML Query File"
esq:
  hosts:
    es_cluster_1:
      fqdn: "localhost"
      port: 9200
    es_cluster_2:
      fqdn: "localhost"
      port: 10200
    es_cluster_3:
      fqdn: "localhost"
      port: 11200
  _indices:
    - index:
        id: "all"
        name: "_all"
        all: true
    - index:
        id: "events_for_three_days"
        name: "[events-]YYYY-MM-DD"
        type: "failover"
        days_from_today: 3
    - index:
        id: "events_from_to"
        name: "[events-]YYYY-MM-DD"
        type: "failover"
        interval:
          from: "2014-08-01"
          to: "2014-08-04"
  query:
    on_index:
      all:
        filtered:
          filter:
            term:
              code: "INFO"
      events_for_three_days:
        filtered:
          filter:
            term:
              code: "ERROR"
      events_from_to:
        filtered:
          filter:
            term:
              code: "DEBUG"
  processing:
    for:
      report1: |
        function find_in_collection(collection, search_entry) {
          for (entry in collection) {
            if (search_entry['msg'] == collection[entry]['msg']) {
              return collection[entry];
            }
          }
          return null;
        }

        function correlate_cluster_1_and_cluster_2(collections) {
          collection_cluster_1 = collections["cluster_1"]["hits"]["hits"];
          collection_cluster_2 = collections["cluster_2"]["hits"]["hits"];
          similar_entries = [];
          for (entry in collection_cluster_1) {
            similar_entry = find_in_collection(collection_cluster_2, collection_cluster_1[entry]);
            if (similar_entry != null) {
              similar_entries.push(similar_entry);
            }
          }
          result = { 'similar_entries': similar_entries };
          return result;
        }

        // this returns the data to the Python side as the 'result' variable
        var result = correlate_cluster_1_and_cluster_2(collections);
  output:
    reports:
      report1: |
        {% for similar_entry in similar_entries %}
        {{ similar_entry.msg }}
        {% endfor %}

(This is not my actual code, I just scribbled it down, so don't lynch me if this fails)

So, actually, I am passing a Python dict with all the query result sets from the ES clusters (defined at the top of the YAML file) into a PyV8 context object; I can then access those collections inside my JavaScript and return a JavaScript hash/object.
In the end, after the JavaScript processing, there could be a Jinja template inside the YAML file, and we can pass the JavaScript results into this template to print a nice report.
There are many things you can do with this.
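For readers who would rather not trace the JavaScript, the same correlation idea can be sketched in plain Python. This is a rough equivalent only; the data and the flat 'msg' field are made up for illustration.

```python
def find_in_collection(collection, search_entry):
    """Return the first entry in collection with the same 'msg' field, else None."""
    for entry in collection:
        if entry['msg'] == search_entry['msg']:
            return entry
    return None


def correlate(collections):
    """Collect entries from cluster_1 whose 'msg' also appears in cluster_2."""
    hits_1 = collections['cluster_1']['hits']['hits']
    hits_2 = collections['cluster_2']['hits']['hits']
    return [e for e in hits_1 if find_in_collection(hits_2, e) is not None]


# Purely illustrative result sets, shaped like the 'hits' part of an ES response
collections = {
    'cluster_1': {'hits': {'hits': [{'msg': 'failover started'},
                                    {'msg': 'node joined'}]}},
    'cluster_2': {'hits': {'hits': [{'msg': 'failover started'}]}},
}

similar = correlate(collections)
print([e['msg'] for e in similar])  # prints ['failover started']
```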

Again, I just wrote it down; it's not the actual code, so I don't know if it really works.

But still, this is pretty simple.

You can even use JavaScript Events, or JS debuggers, or create your own Server Side Browsers. You can find those examples in the demos directory of the PyV8 Source Tree.

So, this was all a 30-minute proof of concept. Last night I refactored the code, and this morning I thought, well, let's write a real library for this. So there may be some code on GitHub over the weekend. I'll let you know.

Oh, before I forget: the idea of writing all this in a YAML file came from working with Juniper's JunOS PyEZ library, which takes a similar approach. But they use the YAML file as a description for autogenerated Python classes. Very nifty.

The author, Jono Bacon, is a long-standing colleague of mine
from my work on the Ubuntu project.
I am not, in any way, affiliated with his employer (Canonical),
and sometimes (not all the time) I really don't share
his views and/or opinions.

Personally, I see him as a friend; not a close one, but more like 'brothers in arms'.
We share a passion for open source, and we both like the Ubuntu OS, heavy metal and pints of beer.
And especially, we each like being the dad of the most adorable and awesome sons we could ever have wished for.

I owe him a lot, because he (and some other community members, but he in particular) pulled
me back into the Ubuntu Business a couple of years ago, and I am very thankful for this.

When Jono revealed his new book two days ago, I started
reading it right away, because, believe it or not,
I was wondering whether he was referring to me to some extent,
because I can be exactly the guy he pictures in his latest book.
The disrespectful, ranting and rambling guy, the angry 'open source' guy
who sits too many hours per day in front of the computer and reads a lot of nonsense
from people who think they are the smartest guys on this planet.

Someone who is passionate, angry and full of ramblings
when it comes to certain positions in our technical world,
and who sometimes speaks up too loudly.

Thankfully, he chose other examples, but I still found myself in his book,
which is not really flattering.

Well, honestly, Jono hit the bull's eye with his detailed description of the various
ways to read the different comments, responses and posts in our technical world.

His statement

"The trick here is to determine the attributes of the sender and the context."
(PDF, Page 8, 'Dealing with Disrespect')

is the essential message (he extends this later to the four important 'ingredients' sender,
content, tone, context).

Old internet people like me, who still remember Usenet, know how hard this can be.
How many times did we read Usenet posts which were, to our eyes and ears, unacceptable, bollocks or insane,
hit the 'Reply' button in our newsreader and flame some poor guy we didn't even know personally?

In those days, we never thought about the other guy; we just flamed, we insulted on a very personal level.
But believe it or not, it also came back, like a boomerang, and it really escalated.
Those were the days when we all had leather for skin and could swallow a lot.

Today the world has changed. In particular, we don't use Usenet so often anymore, and our 'ramblings' can be
found on weblogs, in their comment sections, and on web forums.
What and how we say, write and comment nowadays is more publicly exposed than 20 years back.
People have got softer; we try to be friendlier to each other, mostly using some
conjugation of the word 'good' even to say that something was really bad.

What was missing all this time was a guide on how to deal with those who are not 'nice',
who are not socially well conditioned, people who don't speak the politically correct English/language of choice.

Until now.

Now Jono has written exactly this missing guide on how to deal with those people.
And Jono didn't just write about it; he has the experience, having worked as 'The Community Manager' of Ubuntu.
He has already dealt with them. He knows what he is writing about.

And he knows, that not all of these people are anti-social, hateful or disrespectful.

Many of these people are smart and, in real life, really friendly.
It just takes some experience to deal with them, and Jono has now given us the right guide to learn from.

I really urge you to read this little guide of Jono's, because you can learn from it,
whether you are a community manager, have to deal with a very loud community, or are even
the rambling guy yourself. It's worth a read. There is a lot to learn and to understand.

This book finally tries to solve issues, which can't be fixed technically.

And thanks to Jono, I hope it will make the messed-up technical world a little more
enjoyable.

I have been running Trusty Tahr on my workstation for a long time now, since it was still in development.
And it's one of the best releases so far.

Even during development only some glitches were encountered, and they were easily worked around, which is actually pretty amazing.

If you have followed Ubuntu for some years (and, to some extent, been involved in pushing software into it), you know that this wasn't always the case.

We had a couple of really serious hiccups in the past, but this release was very smooth. I think Canonical's push towards automated QA and the upload-pocket behaviour change were the right things to do.

Thanks, guys, for delivering this amazing release. You can really celebrate, drink a lot of booze and have a good meal (well, now that Jono is the definitive Ubuntu smoker king, he could serve some delicious pulled pork or whatever else he is able to smoke ;)).

Again, thank you, you all know who you are. You guys are amazing. Rock On!