People get older; software and technology specifications don't.

When we run a closed source product we need to pay people to maintain it. That is fine when we have pockets full of money to invest in young students who can take over the code from their elders, when we can afford the equivalent of car servicing, returns, repairs and so on.

It's fine when the lead developer changes jobs if we can hire consultants and have the money to fix or rewrite the product from scratch.

It's fine if we have money to upgrade obsolete hardware, libraries and operating systems. But usually a product dies with its team of developers and is replaced with a new technology stack.

Some examples:

20 years ago, right after the Halloween Documents leak, who would have said that Micro$oft would run Linux on their cloud?

COBOL was a great language in its time, but as time passed nobody wanted to work in it anymore, so it's a huge problem nowadays.

Software complexity increases.

Ten years ago AWS was 2 years old and not many people were using it.

Virtualbox had been released a year before; VMware and Citrix had just started acquiring companies in the virtualisation business.

The iPhone was one year old and the first version of Android had just been released.

Now we have multiple cloud providers with their own products and certifications, multi-cloud orchestrators, and containers with lightweight operating systems.

Who knows what software we will have 5 years from now. But yesterday's desktop computers are tomorrow's phones, and more computing power means more complex systems that can handle things like distributed networking, blockchain, computer vision or machine learning.

There is a price people pay for technology. For software, the price is that there is always a lack of early adopters and contributors.
Growing software adoption increases the demand for software developers, and software development is only a skill, not a superpower, so we see lots of coding camps and coding tutorials. Simple tasks can be handled by newcomers, and more experienced developers are in even higher demand. Loyalty is a big thing in today's world.

With complex software we need more people maintaining it, and more people means either more money or more freedom. Free software that is maintained and developed over the years by a small number of people is winning against multimillion-dollar corporations, and that is the power journalists, managers, business investors and even politicians don't understand.

Politicians are so scared that they try to regulate something beyond their knowledge with methods they know from their childhood. But it won't work, because we don't live in the 80s or 90s, and those are the eras whose laws politicians are trying to apply to technology. But let's not focus on those poor bastards.

There would be no cloud, big data, blockchain or artificial intelligence without open source, because of their complexity. And because complexity requires lots of people, the most disruptive technologies are developed in foundations: Mozilla, Apache, Cloud Native, Linux and Python foundations, to name a few. We have portals that drive innovation and help developers fundraise for their projects.

If someone tries to close an open source project, or some people decide to drive it another way, there is a fork:
hudson -> jenkins,
mysql -> mariadb,
khtml -> webkit,
openssl -> libressl,
ncsa httpd -> apache server,
openoffice -> libreoffice.
Even cryptocurrencies get forked: bitcoin -> bitcoin cash.

It's natural, like the evolution of species, or the extinction of those which didn't adapt to environmental changes.

Despite the many crazy things people have done through history, freedom was always a priority. The USA tells its citizens that it fights for foreigners' freedom, not oil or world dominance, and it works. Most great wars happened because someone was telling people they would get more freedom if they attacked other nations. Before technology, freedom meant more money, land, skins, herds.

Now it's easier, but there are still cyber wars: ip or website blocks, DDoS attacks, security breaches, surveillance.

Still, all of those are driven by open technology, because of the shorter learning curve. Ten years ago, when our washing machine broke, we called a mechanic who might or might not have had the skills to help. Today those skilled mechanics are making youtube videos, and people are selling parts of their broken equipment at auctions, so you can replace your broken parts yourself.

We are nowhere near an ideal world, but if we follow the open technology path, we will change, and maybe even save, this world for other people.

There are still lots of things to do, especially in old businesses: open buildings, open cars, open electric power sources, open electric devices, home appliances, phones. Open hardware should be the next big thing, and it's happening with Arduino, RaspberryPi, or RISC-V. The problem is regulations, but those will probably start to expire with the passing of today's 40-to-60-year-olds, since they will stop protecting their money. I hope the newly born will be more liberal and will praise invention, technology and openness instead of regulations that protect corporate business.

We see a digital revolution in music and tv with services like Spotify and Netflix.

We are living in interesting times, where access to knowledge is not limited to universities and the rich, and the world is shaped by individuals and collective movements. I hope this trend doesn't change.

https://vane.pl/side-project-invoice-generator/2f146b20-a2cf-43bd-8dfc-2059f28b1ff0 (Wed, 20 Jun 2018 22:58:00 GMT)

I have been very busy recently. There is not much time left for me to sleep, and not much time to do something creative in my free time either, because I am too tired of looking at a computer. But here is a story from today and from the past.

The day before I started working as a sole trader, I was looking for some simple open source software for invoice generation.
There wasn't any, so I took an excel template from a friend, but then I realised I don't have excel. I could, however, create invoice software on my own, because software is what I do, I am a software developer, and so I did. It was a very crazy idea, one of many crazy ideas I get in my head.

It took me a couple of hours, or days, I don't remember, it was more than three years ago, but it worked. It is still working, and I have been using it for almost three years.
At that time it used an alpha version of Apache PDFBox. That library was amazing back then and is probably even more amazing now, but I didn't do much more with it, apart from updating my maven dependencies to some of the latest versions.

Going back to the story: every time I generated invoices I had to modify the source code. It was not so painful, since the number of invoices I make is low, but I always had to calculate the payment days by hand, and so on.
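For illustration, the by-hand arithmetic boils down to something like this; the field names and the 23% VAT rate are assumptions for the sketch, not what my generator actually uses:

```python
from datetime import date, timedelta
from decimal import Decimal

def due_date(issued: date, payment_days: int) -> date:
    """Payment due date is simply the issue date plus the agreed number of days."""
    return issued + timedelta(days=payment_days)

def gross_total(net: Decimal, vat_rate: Decimal = Decimal("0.23")) -> Decimal:
    """Gross = net + VAT, rounded to 2 decimal places (grosze)."""
    return (net * (1 + vat_rate)).quantize(Decimal("0.01"))
```

Trivial, but exactly the kind of thing that shouldn't require editing source code every month.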

Time passed and new technologies appeared, like graphql, vuejs and docker, so why not use the old generator as a base and build some modern shit around it to learn something new?

So recently I revived this project and put it on gitlab, so you can check it out. Don't worry, it's not so scary now. Also, at the end of this writeup you can click on the online demo. It is currently only in polish, because I am from Poland and invoice polish customers.

Well, so what's so special about it now?

I added some more services to it, and, continuously learning, I tried to use some recent technologies.
1. There is still the java application, but now it has a simple http endpoint that receives json and outputs a pdf in response.
2. Since I do mostly python and never had a chance to use graphql, I created a backend application, using graphene-django, that in the future might become some sort of accounting app.
3. Recently I also started to like vue instead of react, especially nuxtjs, because everything there works out of the box. I created a simple frontend in nuxt.
My microservice flow looks like this:
nuxt -> django -> java -> pdf -> django -> nuxt
And we get a pdf as the output.
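With stub stages standing in for the real services, the round trip can be sketched as plain function composition; everything below is illustrative, not the project's actual code:

```python
# Toy sketch of the nuxt -> django -> java -> pdf -> django -> nuxt round trip.
# Each stage is a stub; in the real project these are separate services.
def django_to_java(invoice: dict) -> dict:
    """django forwards the invoice json to the java endpoint."""
    return {"request": invoice}

def java_renders_pdf(payload: dict) -> bytes:
    """The java service turns json into pdf bytes (stubbed as a pdf header)."""
    return b"%PDF-1.4 " + str(payload["request"]).encode()

def django_to_nuxt(pdf: bytes) -> bytes:
    """django streams the pdf back to the nuxt frontend unchanged."""
    return pdf

def round_trip(invoice: dict) -> bytes:
    return django_to_nuxt(java_renders_pdf(django_to_java(invoice)))
```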
Of course it would be a big deal to run and configure all of this.
Fortunately we have docker these days, so I created another project that uses docker-compose to run all of those apps with a single command: bash run.sh

Probably this is too much for generating one to three invoices per month, but what I've built can easily be scaled to millions of invoices, so feel free to use and modify it ^^. Also, it was fun to see how graphql works by doing something useful for myself.

Since it is now easy to develop and run, I will probably add some more functionality to it, and also some more documentation. What I will definitely do is make it international, not one-language specific.
But first I need some more time and some rest.

Cheers, have fun.

https://vane.pl/why-serverless-is-a-next-big-deal/1ca6f9e2-97ef-4eec-ade7-6f373426f944 (Sat, 07 Apr 2018 13:22:05 GMT)

After almost 10 years of being a software developer professionally, and 5 years before that doing it as a hobby, I now see a fundamental, progressing change in the way most of us (not all) will be doing our professional work in ten to twenty years.

The first root cause is the cloud. It started by slowly offering virtual machines that we could use for specific purposes and then get rid of; that's called IaaS, infrastructure as a service. From now on, when I want a server, I don't need to go to a shop, plug in a cable and worry about the hard drive dying; I just order it via a website. That happened because people started thinking: what if the individual components of a traditional computer could be separated and served independently of each other? And that was a big deal, because someone running CPU- or GPU-intensive tasks no longer needs to order a big disk drive. Now I can choose whatever number of processors I want, whatever disk size I need. That's optimisation of resource usage and maximisation of efficiency.

But then there are tasks that everyone needs and used to do independently with custom software, starting from the simple ones: converting photos or videos, database access, load-balancing web servers, speech to text, text to speech, translation between languages, mapping software, 3d object rendering. Those are now served at a bigger scale; they call it PaaS, platform as a service.

But there are still more complicated ones that have not been separated into simpler services, like collecting and showing statistics, bug tracking, email receiving, chat, document processing, customer aggregation. We call that SaaS, software as a service.

So we get a single piece of functionality that we can build on as needed, and it will scale to an infinite number of users and an infinite amount of computing power. We are only limited by money.
So we think we have it all, but we still need custom software and developers to integrate those systems to work the way we want.

As we all know, the bigger the code base, the more error-prone it is. The bigger the scale, the smaller the costs. Those are things that don't change.

The second root cause is containers. Since "works on my machine" is no longer an excuse, we can deploy them, scale them using orchestrators, and cluster them using different tools. We can make small programs, which we now call microservices, that let us manage smaller portions of less buggy code implementing one piece of functionality, and we can still scale them and register our services as we want.

But we still mostly work eight hours, from 8 to 16, and don't need those services 24 hours a day. Our usage of resources is asynchronous: when Europe sleeps, people on the other side of the globe work. We don't use the same resources all the time, and the same goes for applications.
Mostly we don't want to run our applications 24/7; we are not clicking "send email" all the time. When we sleep we don't need the television or the light, the kitchen or the car, only the alarm clock that will wake us up.
That means most of our tasks are scheduled at specific times, all over the world.

The other thing we need to get shit done fast is information. All the systems we have been creating through the years are in fact transforming input into specific output.
Now that computing power is so cheap, some of them take videos and photos and write down the colours of clothes, or the number of people walking into and out of our store during the day.

That's where serverless shines. We can take document A, or service B and its information, transform it to match our purpose using service C, and if we have an infinite supply of those microservices, we can glue them together using lambda functions. We can run a function only from 8 to 16, or whenever we want, and pay only for computing power, network utilisation and storage, or store the data on our own computer and pay nothing while idle.
We can maximise hardware utilisation to maximise profits and cut costs. We can write simpler software; we can even write this software in cloud editors.
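As a sketch, a lambda-style function is just a handler that runs only when called and costs nothing in between; the event fields and the toy transformation below are invented for illustration:

```python
# A minimal lambda-style handler: take an input document (event),
# transform it, and return the result. No server runs between invocations.
def handler(event: dict, context=None) -> dict:
    text = event.get("text", "")
    # The "transformation" here is a toy: count words and uppercase the text.
    return {
        "word_count": len(text.split()),
        "shouted": text.upper(),
    }
```

Gluing services together then means chaining handlers like this one, each billed only for the milliseconds it actually runs.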

So from now on I am more focused on looking up at the clouds, because that's where the future is.

https://vane.pl/distributed-architecture-with-celery/1b7133ee-67af-497f-bbf8-9649f8e27e4c (Sun, 25 Mar 2018 20:40:10 GMT)

For a side project I am working on, I have a problem with long-running tasks that block requests inside the http server for 1 to 30 seconds, depending on the task.
I decided to use flask for this project, since it has a nice blueprint feature: each blueprint can be turned into a microservice when necessary.
To scale my http servers and distribute the workload among other machines, I need some kind of broker / publish-subscribe / queue.
Since all my backend stack is in python, for rapid development I decided to use celery with redis.
Redis, because I can also use it as a session store and leverage it as a cache layer, and I don't need to introduce another component or language, as I would if I tried e.g. RabbitMQ.
I also decided to store my task results in postgresql, since I like having an sql database for important data, and I can use one of the nice flask plugins, Flask-SQLAlchemy.
I can also later move my database to some cloud solution, like the great cockroachdb created by former google employees, without sacrificing code, since it uses the postgres driver.
Since I have now written about almost the whole stack, I want to mention the last part, haproxy, which will be my load balancer of choice.
I drew the whole setup in the great opensource UML plugin/standalone tool umlet. Please forgive me, uml champions, my inconvenient diagram.
Ok, now on to celery itself, since it's healthy to eat it.
First, I want all my workers to be classes, so I can define internal methods when I need them.
Obviously I am not building the distributed calculator from the examples but an actual application (I hope so),
so instead of plain task functions I use class-based workers.
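A framework-free sketch of the class-based worker idea; with real celery you would subclass celery.Task and register it on the app instead, and the names below, including the registry stand-in, are illustrative:

```python
# Sketch of a class-based worker: a class with internal helper methods,
# not just a bare task function. Celery calls Task.run() when the task executes.
class GreetWorker:
    name = "tasks.greet"

    def _format(self, name):
        # Internal method: the bare-function task style has nowhere to put these.
        return f"hello {name}"

    def run(self, name):
        return self._format(name)

registry = {}  # stand-in for celery's task registry

def register(worker):
    registry[worker.name] = worker
    return worker

worker = register(GreetWorker())
```

The point of the class is exactly those internal methods: shared setup, formatting, retries and so on live on the worker instead of in loose module-level functions.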

I can also see my persisted result inside result.db using the great opensource sqlitebrowser.
There should be one entry inside the celery_taskmeta table, and if we double-click the result column, there should be hello siema inside that blob.
So that's it: I can now spread workers across multiple files and use them as I want. I can also run 2 workers in 2 terminals to see which one picks up a task and how the tasks are distributed.
All the code from this celery example is available here.
Enjoy.

https://vane.pl/migrating-svn-to-git-complete-guide/5884698d-c890-4bea-84c4-0c83a303da3b (Mon, 26 Feb 2018 23:03:40 GMT)

One of the tasks at my day job was migrating a 4-year-old svn repository to git.

Technically I had no branches; all history was in one single branch. The repository was divided into 4 directories, and part of each was what I wanted to become master. There were also separate directories for each release, which I wanted to turn into branches, so I wouldn't lose history and could always go back where I wanted.
That, roughly, is how the repository looked.

At first I tried to follow the tutorial mentioned above, but I hit some failures; the result was not what I expected, but at least I learned something.
That is what's great about programming: you experiment, you fail, and you learn.

To get more knowledge I started reading the great documentation of git svn, and of course found some great posts on mighty stackoverflow.

I had that ideal repository structure in mind, and I didn't want to drop history, since it's great to see how a project evolved over so many years. Since the repository would be on github, which has some nice charts under insights, I was excited to see that magic.

Back to the point.

After gaining some knowledge I was confident enough to try a simple proof of concept and migrate part of my repository.
I started by creating an empty repository and modifying the .git/config file by adding something like:
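A hypothetical svn-remote section in that spirit; the URL and directory names here are placeholders, not the real repository:

```ini
[svn-remote "svn"]
    url = https://svn.example.com/repo
    ; part of the old trunk directory becomes master
    fetch = trunk/project:refs/remotes/origin/master
    ; per-release directories become branches via wildcards
    branches = release_part1/*:refs/remotes/origin/*
    branches = release_part2/*:refs/remotes/origin/*
```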

Since I didn't have duplicate names in release_part1 and release_part2, I was safe with wildcard branch creation and didn't have to type all 32 branches by hand.
After running git svn fetch, to my surprise, it worked great.

So I had a working plan for the first step. I removed everything, backed up my .git/config, created a new repo from scratch and modified the config file to get the remaining stuff from svn and finish my work.

I was happy, but since we wanted to migrate this repository to a public place, I had to get rid of sensitive data in files across all commits, and I also wanted to alter the git history to match accounts on github, so I could finally see those colourful charts.

I decided to deal with the history alteration as my second task.
I found this stackoverflow answer and a great post on github about git filter-branch --env-filter, and since that particular repository had more than one contributor over the years, I needed to alter the script a bit to match my needs.

Well, I wanted to run this script only once, and there were only eleven contributors, so it wasn't pretty.
To find the contributors I ran a simple git one-liner, git shortlog -sne, which shows each author's commit count, name and email.

Yup, I copied that block ELEVEN times.
It took about 1 hour to deal with 10k commits, so I was patient and let it finish its job.
I didn't even make a proof of concept at that point, and it was all good (that was before I found my mistake).
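The copy-paste wasn't strictly necessary; a small generator in the spirit of that env-filter approach could emit one if-block per contributor from a mapping. The emails and names below are invented:

```python
# Generate the body of a `git filter-branch --env-filter` script from a
# mapping of old email -> (correct name, correct email), instead of
# copy-pasting the same if-block eleven times by hand.
BLOCK = '''
if [ "$GIT_AUTHOR_EMAIL" = "{old}" ]; then
    export GIT_AUTHOR_NAME="{name}"
    export GIT_AUTHOR_EMAIL="{email}"
fi
if [ "$GIT_COMMITTER_EMAIL" = "{old}" ]; then
    export GIT_COMMITTER_NAME="{name}"
    export GIT_COMMITTER_EMAIL="{email}"
fi
'''

def env_filter(mapping):
    """mapping: {old_email: (github_name, github_email)}"""
    return "".join(
        BLOCK.format(old=old, name=name, email=email)
        for old, (name, email) in mapping.items()
    )

script = env_filter({
    "dev@old-company.example": ("vane", "vane@users.noreply.github.com"),
})
```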

The third task, the hardest in my opinion and the one I expected to require some coding, turned out to be very time-consuming but simple, because there is this fucking awesome tool, git bfg, that will replace all usernames and passwords with the string ***REMOVED***.

So I dug through the files in the repository, found lots of usernames and passwords, and wrote them to a simple file, passwords.txt. It was indeed very exhausting, and my file looked like this:

username1
pass1
pass2
usernameN
passN

because I didn't want to use regular expressions.
It turned out I didn't have java 8, so I needed to install it on the vm I was using to migrate this repo.

The rest was one simple, stupid command that took about 10-15 seconds, because this tool is blazing fast:

bfg --replace-text passwords.txt my-repo.git

Then I cleaned the history by running the commands the tool prints when it finishes, to clean up the git refs history, and it was done, so I added the git remote and pushed my work.

In the end it turned out that I needed to run my stupid history-altering script twice more, because I made some mistakes at first, and it also turned out the script didn't deal with branches.
What I needed to do was delete the old history with git update-ref -d refs/original/refs/heads/master
and then push the new history using git push --force --tags origin 'refs/heads/*'
With that, the migration was finished and I could move on to other tasks.

I hope this helps someone migrate their repo from svn to git, because it's not as painful as I thought it would be.
See you next time.

https://vane.pl/if-technology-was-a-person-technology-people/8e56f8a7-c2a1-4737-9eb1-174981be5370 (Wed, 20 Dec 2017 01:18:00 GMT)

What if we just forget about today's world and assume that our phones, tools, cars, operating systems and word processors, all of this technology, is one person. The only problem with this person is that there are three of them, yet we speak of it as one: simply, technology. Well, there are believers who exalt hardware over software, but we need software running on hardware to see the data. And the data is the weirdest of the three. We can send and receive data, measure data, modify data, but we cannot see it without technology, without those mighty pie charts, pivot tables and animated, beautiful presentations.
Finally, when we do see technology: there would be none of it without the people using it, and there would not be so many living people without technology.

So technology was created in the image of people, yet people move and behave the way technology wants them to.

Let's start with the people then, because there are many technology-praising freaks out there.
The biggest freaks are those who believe in technology, so they created a religion where technology is the only answer to their problems and questions. When they feel bad, they look up their diseases using the technology brain, the internet, and access it through a so-called search engine. When they are lost, they ask the technology communicator interfaces they always carry with them for a route. Technology is so important to them because they can always ask it for help or talk to it about their lives. It always listens and tries to help. Technology is good to them because it doesn't feel offended, is always awake, and feels so personal. So if they come home at night drunk and angry, because the whole world is against them, they can beat their frustration out on technology, and nobody will do anything about it. The day after, the technology will not ask questions, but will serve them like nothing happened.

The worst people amongst those believers are the extremists who give You technology for free. They give You free internet so You can use it, get addicted to it, and become part of their worship. Keeping You in constant touch with technology is their goal. They are like a sect; they always tell you it will make Your life easier, that it will solve all your problems. For them technology has no flaws; if a technology talking interface somehow breaks, it is always Your fault, because You didn't use it properly.

There are also technology crusaders who fight technology outlaws. They have even convinced governments around the world that you cannot have money without technology. You need to talk to technology so technology can repeat what You told IT to Your friends, instead of You talking to Your friends directly. You need technology to access your bank account, to run a company, to pay taxes. So people have to waste their time learning how to talk with technology, to make money, to pay for being able to live where they were born, instead of living their lives as they want, making money if they want, and talking with technology when they want. Crusaders want everyone to become technology slaves, so that no one can talk to anyone else or do anything without technology, and so there will be no other technologies besides the right one.

There are also clairvoyants who talk trash to wealthy and powerful people using technology's answers. They know how to access the technology in the clouds to do so. They have learned how to ask technology many questions at once, using specific code called map reduce, so technology can give one answer to all those questions.
Then they make those animated, beautiful presentations about how technology helped them answer those stupid questions, so they can hire technology again and ask it more questions. The problem is that there will always be questions, there will always be answers, and there will always be stupid people. They are also very powerful, like the crusaders, so governments started asking them questions like: how many people paid tax this month? How many people will die in car accidents today? How many technology talking devices were sold this month? The number of questions and answers is infinite.

There are also technology terrorists. They got technology drunk, so technology told them some of its flaws. With that information they can bend the rules connecting people to the glowing technology interfaces of this world. People think technology answers the questions they ask, but in reality those questions are answered by the terrorists. Some people believe the terrorists have already chosen one president of this world, because they talked to so many people and told them so many false answers that people voted against their own will.

The last group is now growing its power exponentially every year. They call themselves bankers, but I will call them data worshipers. They try to convince everyone that data written by technology in a special way, so nobody can modify it without the knowledge of the majority, is worth so much that they have already sold billions of dollars of it to those stupid people. They assume everyone in this world is a pure egoist and that there are no two people in this world who can trust each other. They sell it as a currency, many times, under different names, so when one gains worth there will always be a less valuable currency they can persuade poor people to buy, telling them they will become rich.
They also want the other people, the ones they cannot sell anything to, to work as miners, so they can buy cheap data from them and sell it on to poor people. And, as always, they take a percentage of every sale and are themselves safe from this data.

Those were some of the most dangerous people who talk with technology, but don't forget about the middle class and the ordinary people who talk with technology too. Because the fewer people there are who don't talk to technology, the fewer people there are that technology knows nothing about.
Since all the data is around us, in the air, and we only need a brain interface to talk to it, there are also plans to put those glowing devices on our eyes, in our skin or in our brains. For now they have started by trying to put technology into every mechanical device we know, so our washing machines, fridges, lights and cars won't run without technology. The problem is that each year I need a more advanced talking technology device, because technology, like us humans, keeps growing, and it grows so fast that the old talking devices are obsolete before you even buy them.

On top of the middle class are the technology direct talkers. They talk to technology directly, just like the terrorists, but they have moral beliefs, or are not fluent enough in technology speech to become ones. They use a special language called algorithms to ask technology questions. Because those questions are mostly asked by corporations, run by bankers, clairvoyants or even terrorists, it usually takes more than one technology talker to arrange the algorithms into a specific question. The question usually also needs to look good, so some technology makeup guys are involved in making it too. I won't dig into those questions for now, because that is a very different story.

Then there are the technology users, who work like ants or bees. They ask the technology the same already-prepared questions every day, unable to ask their own, because they don't know the technology language. The problem with those questions is that they always get a different answer at a different time. But they don't wonder why, because they worship their families more than that stupid job for a corporate employer. They are paid well enough not to resign from the stupid, meaningless job, which also makes them perfect for it, because they don't ask their boss difficult questions. Not that their boss has the answer, and neither does his boss, to the question of why they do this job five days a week, eight or even sixteen hours a day. Nobody knows, and those people are just small cogs in the big corporate machine. They are blinded by the prospect of being promoted to boss some day. They also know the promotion ladder, so they know that if they do their job without asking "what the hell am I doing here", after twenty years they will be earning double what they earn now.

Last but not least is the poverty class, the majority of this technology-dominated world. Technology rules their lives every day: they stop when technology shows a red sign, they clap when technology says clap, they laugh when technology shows a movie with laughter. They buy the products technology advertises to them.
They do this for different reasons, sometimes because of personalities that won't let them stand on top of this technological farm, sometimes because they were born in poverty and nobody had the knowledge, time or money to teach them the basics of talking with technology, so that they could become middle class. Most of them are good and helpful people; they don't know why or when the world became so obsessed with technology. But they use it too, because they reach for it when they want to call their friends or talk to their families. They were crusaded by the technology worshipers. On top of them are many technology junkies, who work hard to get a new technology talking device so they feel better, though they don't know why. Many of them want the technology that is grown on fruit trees, their so-called entrance to paradise, even though this technology looks like someone took a bite out of it. Every year they wait in the streets for the new tree-grown technology talking devices, to be the first to talk to technology with them.

Those were most of the personas, but there are many more. I didn't write about the question-and-answer devices, hardware and software, and not much about technology's character, which is more complicated than we all think.

I hope it was entertaining, and that reading this in english didn't hurt too much. And remember, all of the above is just a fantasy, so don't care too much.
;)

https://vane.pl/build-and-run-docker-registry-from-sources/9d435d79-dd01-4000-badc-8d0b076f4094 (Sun, 19 Nov 2017 23:29:00 GMT)

This will be a short post on how to build and run the docker registry from sources.

I need a registry to deploy my docker images so I can use them with docker swarm: the swarm nodes will pull my application images from the registry.
That gives the cluster a single point from which every machine can download the custom application.

The first problem is that the docker registry is not available for the arm architecture of the raspberry pi.
Fortunately it is an opensource project, so I can download it, build it and run the docker image.
Before that I need to set up a golang development environment, simply by installing the go compiler and setting GOPATH:

sudo apt-get install golang

then I will set up GOPATH by editing .profile in my home directory (or .bashrc if there is no .profile file):

vi ~/.profile

and add these lines:

export GOPATH=$HOME/go
PATH=$PATH:$(go env GOPATH)/bin

after reloading the profile (source ~/.profile), or logging in to the shell again, I can check that everything is set properly by using:

go env

and it should output something like:

GOPATH="/home/myusername/go"

now, to build the registry, I will just invoke:

go get github.com/docker/distribution/cmd/registry

and after a while I can use the registry command, since the binary is built and located in $GOPATH/bin.
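The registry binary expects a configuration file passed to registry serve. A minimal filesystem-backed config might look roughly like this (the storage path and port are example values of my own, not from the original setup):

```yaml
version: 0.1
storage:
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000
```

With that saved as, say, config.yml, the registry can be started with registry serve config.yml and will listen on port 5000.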

If I plan to run the registry on a public network, I need to consider adding a trusted TLS certificate, for example from Let's Encrypt.

OK, so now I have Docker Swarm on three nodes and a Docker registry ready for some scaling fun.
Next will be building a custom Docker image, pushing it to the registry, and scaling it.
Adding/removing nodes and using docker-compose.
Also, after a month or so of looking at this configuration madness, I think it is a good idea to make a simple Python webapp to automate the whole mess.

So stay tuned.

]]>
]]>https://vane.pl/docker-install-with-swarm-from-scratch/9c357edd-d35b-4e2f-bc8f-cce68bf7d603Wed, 11 Oct 2017 23:22:00 GMTI found Docker an interesting technology about two years ago. At this point I want to use it on a larger scale than one machine, but not necessarily using the cloud or Kubernetes.
I just want to be able to create a cluster of three physical machines (a swarm) and deploy applications (services) on it.

1. Installation Of Docker

First I will install the latest Docker version using these commands:

Probably I could also use some proxy, since dockerd is simply an HTTP server with a JSON API, or at least some environment variable, but this way I can learn something more about systemd's shitty configuration.

3. Installation Of docker-compose And docker-machine

Third, I also want to use docker-compose to manage my multi-container Docker application, and since it is a Python package I will install it with: pip install docker-compose

So now, since I have all the lame docker-prefixed stuff in place (docker, dockerd, docker-machine, docker-compose), I can add some machines to maintain and finally start to swarm my machines with containers.

4. SSH Keys

So I will be adding 3 machines, and I want to access them without a password.

If someone wants to add more machines to the swarm later, or simply wants to add more managers, the specific join command can be retrieved by invoking:

docker swarm join-token worker
docker swarm join-token manager

Next I will try to write about how to build Docker applications, use docker-compose, and deploy against the swarm cluster.

]]>

]]>https://vane.pl/sampling-audio-files-with-python/001eaa2d-71a0-45e9-be60-09aa7cd3c07fWed, 27 Sep 2017 20:17:36 GMTTo be quick: for a recent idea I need to segment an audio file, ideally right after one speaker finishes his sentence and a second speaker starts talking. I also need to detect the number of unique speakers in the audio.

There are plenty of libraries out there from lots of universities, professors, and PhDs. All of them commonly use the same ideas from the speech recognition field. I won't dig into them in this post. Unfortunately, none of them works perfectly for this task.

So I got my hands dirty with audio using Python, and I found out I need some utility to help me with cutting/plotting audio files so I can progress faster.
I created a small utility class for this using scipy, matplotlib, and numpy. There is not much code for now, but you can use it freely if you want to:

open a file and convert it to mono

cut seconds from an audio file

get the length of an audio file in seconds

process an audio file using a custom method

save a file (a wrapper over scipy.io.wavfile)

plot a spectrogram with matplotlib.specgram
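For reference, the first two helpers can be sketched roughly like this with numpy (a minimal sketch; the function names here are my own guesses, the real code lives in the gist below):

```python
import numpy as np

def to_mono(data):
    # Average the channels of a (samples, channels) array into one channel.
    if data.ndim == 1:
        return data  # already mono
    return data.mean(axis=1)

def cut_seconds(data, rate, start, end):
    # Return the samples between `start` and `end` seconds.
    return data[int(start * rate):int(end * rate)]

# Tiny demo: one second of silent stereo "audio" at 8000 Hz.
rate = 8000
stereo = np.zeros((rate, 2))
mono = to_mono(stereo)
clip = cut_seconds(mono, rate, 0.25, 0.75)
print(mono.shape, clip.shape)  # -> (8000,) (4000,)
```

The idea is that once audio is a plain numpy array, cutting is just slicing by sample index, which makes experimenting with segment boundaries fast.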

You can try it just by typing:

python audio.py test.wav

But as I wrote before, you need scipy, matplotlib, and numpy to use it.

If and when you manage to run it against your WAV file without errors, you should see a spectrogram chart with the WAV file plot. The X axis shows the number of seconds; the Y axis is probably signal amplitude, but I could be wrong.

If you look at the end of the code, it is simply a one-liner:

spectrogram(*to_mono(sys.argv[1]))[0].show()

I will try to update the library, and this post with the list of features, as I need more methods.
Maybe I will also write something more about my audio segmentation and speaker diarisation struggles.

I embedded the GitHub gist below so it is easier to copy.

]]>
]]>https://vane.pl/some-words-on-making-prototype-software/38847b56-edc6-4e1a-9de1-682c68d64a22Sun, 17 Sep 2017 20:24:00 GMTOK, so when starting a new project the biggest decision is choosing the right tools for your needs. To do that you need to know what your application is about and whether there is already something out there that will help you write it.
That's because when you are in a small team, if you try to write everything by yourself you will simply waste a lot of time.

Ten years ago, to solve software development problems I needed to read books, find people writing software in my technology and dig through their blogs, ask questions on mailing lists, or think up a solution and implement it myself from the manuals.

Now making software is a hundred times easier, all thanks to the open source movement. Show me the person who thought in the 90s that Microsoft would make profits by running Linux and other third-party software. Open source and all the online courses form the biggest community of developers I have ever seen in my life. And most of the common challenges have already been solved, thanks to tools like Stack Overflow and GitHub, where millions of people share their knowledge.

Let me explain by example how simple it is now to hire 328 people for free: just look at a GitHub project bar with 328 contributors.

Of course I cannot tell them what to do, but I can freely use all their work for my own project, and if it makes me a profit I can appreciate their work later. That is how it works; these are amazing times. It's not something you see in other businesses, but it's changing. THANK YOU, OPEN SOURCE, for that.

The biggest challenge I see now is to find those projects made around the world and compare them with each other, but maybe some smart folks will find a way to do it.

Back to the topic: the first thing you need to do before starting a prototype is gather development requirements. This can be done just by finding and isolating all the components you need in your application, and distinguishing all the interactions between users.
Do it by asking questions and googling possible answers; make mistakes, repeat, and finally you will have everything in place.

Ex. I want to make an application, and I want to start with a web application.

Some questions on the backend:
Do I need user accounts? Do I allow login via social networks?
Do I allow interaction between users?
Do I need search, and what do I want to search?

Some questions on the frontend:
What components do I need: a table, a calendar, a list?
Is it a single-page application or multiple pages?
Do I need it to work on mobile devices?
Could it be a mobile app?
Do I need special components like a sortable table, autocomplete, calendar, or treegrid?

The second thing is to find the right projects.
Finding the right projects is the tricky part, because software development is all about uncertainty, technical debt, and turnarounds of technologies.
That's what you, business companies, and customers need to realise: software is obsolete before it starts. By the time developers start writing your new product, it is already old, by the nature of the world. We as people don't live forever, and all the code we write will either be replaced or thrown away. That's exactly what is happening with COBOL software now, and it is one of the biggest challenges in software development history.
So the less code your company writes, the less code it needs to maintain. The more modular the application is, the easier it is to replace outdated components. And there are no right answers to these questions, only possible temporary solutions. There is no single master language or technology, no single database, that will drive your product.

There is also the cloud, which tries to solve some of the above problems, but I won't write about it now, because even the cloud is evolving and can easily be made obsolete by containers and lambdas. Some of the important questions to ask when evaluating a project are:
What is the license, and is it permissive enough that I can use and modify the code?
When was the last commit in this library?
How many contributors does it have, and are they from a single company or country, or from multiple sources?
How big are the company or companies developing and using this library?
How frequent are the library's version updates, and are they backward compatible?
Does this library still need active development, or will its standards not change in 10 years?

The last question is usually the most subjective and based on the current market situation.

Ex. Frontend development.
Browsers are the most used and most frequently developed software in the world now, but at least for now they are also backward compatible. So you can use some old library, but you have to accept that most users are migrating away from it and you won't find support when you encounter problems.

And software development is mostly problems, mistakes and solving them.

OK, so you have your components defined (at least two of them), some initial view planned, and some requirements written down, either on paper or using some fancy software. Don't lie to yourself that you are safe at this point, because everything will probably change a hundred times between development cycles, so stay flexible.

To start writing some code, all you need to do is take your favourite editor, make some imaginary duct tape in your mind, and connect those pieces. Maybe write some decorators if you are certain about them. Ex. on the frontend:
when you find two libraries that you want to use, you need to put them on the same webpage and check whether they play together nicely and their CSS styles don't overlap.

Make some interactions, put in some mockup data, and don't read the manual unless you are stuck. If you want to be a good duct-taper, write code.

Make a small working thing, only a proof that what you are trying to achieve is possible. Don't waste your time starting with authentication, because you don't have users yet, unless you are writing some CAS product.
Don't start with interactions between users unless you are writing a chat.

Start with something minimal. Let it be only a "Hello World" text, but using all the tools and all the servers you need, so you can run it and then modify that text into the buttons you want, the layout you want, the protocol you want, the data you want to get.

Let code be ugly, but working.

Being a perfectionist is good, but if you are not fixing critical parts of software and are only writing a prototype, it doesn't have to be perfect at the start. It never will be, and in a year you will hopefully look at this code and ask yourself why you wrote it so badly; we are only human, we make mistakes and we evolve through learning and practice.
The more you practice, and the earlier you find your mistakes, the better software you will write.
Don't write clever code at the expense of readability; use multiple variables to be more expressive so you can understand it later. Name your methods by what they do, even if the names get long. Ex. instead of writing silly one-line documentation about how smartly a method interacts with the database, name the method insert, update, or delete, not add, replace, or remove.
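A tiny sketch of that naming advice (the class and method names here are my own illustration, not from any real project): the method name states the database-level operation, so no one-line docstring is needed to explain it.

```python
# A toy in-memory "table" to illustrate naming methods by what they do.
class UserTable:
    def __init__(self):
        self.rows = {}

    # Clear names: each method states the operation it performs on the table.
    def insert(self, user_id, name):
        self.rows[user_id] = name

    def update(self, user_id, name):
        if user_id in self.rows:
            self.rows[user_id] = name

    def delete(self, user_id):
        self.rows.pop(user_id, None)

table = UserTable()
table.insert(1, "alice")
table.update(1, "bob")
table.delete(1)
print(table.rows)  # -> {}
```

Compare that with add/replace/remove, which could just as easily describe a cache or a list; insert/update/delete tells the reader it is persistence-shaped code.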

And most of all, keep asking yourself: do I need to write this code, or should I find and use a library that does it for me?

Be a duct tape developer, contribute your work to open source when you can, and save the world from the software flood.
Happy coding.

]]>
]]>https://vane.pl/my-personality-is/6cb74f5d-d914-41fe-88a6-062be2af87b6Thu, 13 Jul 2017 21:42:09 GMTSome time ago I found a fun website where you can test your personality.
It's based on the Myers–Briggs Type Indicator, which is similar to Carl Jung's approach.
So my personality is...
Thanks https://www.16personalities.com for fun.
Since it's sometimes a schizotypal personality, welcome to the freak world, yeah.
It's also nice to read thoughts from other people with the same personality.]]>

]]>https://vane.pl/user-interactions-with-browser/8876e9e4-e98b-413f-9beb-bb9a60f23a7fWed, 28 Jun 2017 00:19:00 GMTTo automate the browser I need to list all user interactions with it. I divided them into two sections.
First are the webpage interactions: keyboard/mouse/focus events.
Second are the browser window/tabs/address bar interactions.

So first let's list the webpage interactions, because those (besides selection) are fairly simple.

So those were the basic things I want to record to automate browser interactions. When doing so, I need to know the place in the DOM where an interaction occurred and what its effects were. Sometimes, to get the effects, I need to listen to the browser itself.

The great thing is that the browser API named WebExtensions is now mostly cross-browser compatible, so I can write once and deploy everywhere with small modifications.
ex. between Chrome/Opera and Firefox it would be at least:

if (typeof browser == "undefined") {
    var browser = chrome;
}

but the WebExtensions API is not the point of this post.
So, getting back to browser interactions.

Let's start with webNavigation, so I can know the current status of the webpage.

The last ones are the webRequest actions on a webpage, so we can store data for later use. Sometimes we also need to delay the next browser interaction until the data is loaded, so if we record webRequest events we will know when to wait and when to simply interact with the webpage (I hope that makes sense). Let's not forget that webpage actions are mostly asynchronous.

onAuthRequired - interested - I want to know when some basic authentication is required

onSendHeaders - not interested - an informational event; onBeforeSendHeaders is more important

There are also more browser APIs. I will consider these in the future as the most interesting: contextMenus/cookies/history/runtime/sessions/windows.

But for now I will focus on the events listed above as the foundation of the browser automator project.

As you can see, it needs a bit of work if you want to listen to browser actions. I also hope that replaying those actions can be done entirely in JavaScript.

My focus is to create a browser automator that works locally, without any cloud or internet connection.
Every action will be stored in the extension's localStorage.
I already know that webpage and browser interactions can be recorded into a set of user actions and then saved under a name.
What I will focus on next is replaying those interactions, and also replaying browser actions.

Important points to consider along the way are creating pauses between actions so the recorder/player can be paused,
allowing manual modification of a set of actions, and creating some sort of universal pseudo-description language for those interactions.
So stay tuned for some more insights from my struggles building browser automation.

]]>
]]>https://vane.pl/compile-angleproject-from-source/2970cc38-3fbe-4ee8-a08d-0ba6c3703e00Sat, 20 May 2017 00:55:37 GMTFor those who don't know, the ANGLE project is not the Microsoft one (which apparently comes up first when I type it into DuckDuckGo) but the Google-founded Almost Native Graphics Layer Engine, which is part of, to name a few: Chrome, Firefox, and Qt.
It's a great piece of software I always wanted to dig into, because it's behind the WebGL engine of the Chromium project.

I hit some inconsistencies in the otherwise great Getting Started document, so I would like to provide a step-by-step guide on how to simply run hello_triangle.
I assume you have at least git installed. I don't know if I had any other dependencies.

We want those tools to be executable from the angle directory, so either do it once by executing: export PATH=`pwd`/depot_tools:"$PATH", or add the path of the depot_tools directory to ~/.bashrc if you want a permanent change.

We will execute the next commands in the angle directory, so simply cd angle

After that, on macOS or Linux, we want to change our default renderer, therefore edit src/libANGLE/renderer/d3d/DisplayD3D.cpp

and after compilation finishes you can see the triangle by executing ./out/Debug/hello_triangle

Here it is:

I hope you managed to do it.
Enjoy

]]>
]]>https://vane.pl/internet-time-travel/2768434a-8f77-44b4-9ce6-f133e690ff09Thu, 18 May 2017 22:32:36 GMTThe internet has a big impact on our lives. Three years ago, when I was traveling on Warsaw's public transport, about 10% of people were looking at their phones. Now at least 30-50% of us are looking for something interesting, something that will get our miserable lives out of these real-world problems.

The internet is the TV of our times; deal with it, old-timers in governments. You cannot do anything about it, even if you try. But it won't last long in this form.

Time plays a big part in this problem. As a member of internet society, and as a person who has had some input in this field, I know how hard it was to solve technical problems without Stack Overflow.
Or let's dig into the beginnings of search. Before Google there were only big catalogues of websites, managed by people working, for example, for Yahoo.
Now let's step back in time even further and imagine our grandfathers digging through paper encyclopedias while solving crosswords. Let's imagine me before Wikipedia was born. Really? Even now not everyone trusts it.
So now, for a little social experiment, type "Woman in blue outfit playing tennis" into your search engine. How many of you just clicked the link? And how did it happen that we got what we wanted?

There is also a second thing about the internet changing the world that we won't understand for another 10-30 years.
You cannot control it, and if you try to put up obstacles, the technology will advance anyway, so all you humanists, as I call the regulators, will always be behind. Bring to mind one friend who lives in Europe and looks at those annoying popups about cookies. Then tell me about the people who are worried about their privacy but still use some IoT device or smart TV. Look at the street in your town and look for a person without a smartphone and a camera. Those places and people exist, but they are a minority.

Time is the only limited resource in our lives. Not money, because you can always earn money by sacrificing time: either you gain trust through a diploma, try to self-study, or find some other ways (not always legal) to outsmart the system.
And that's where the internet ships with time travel.

One of the biggest industries that understands the internet and technology is music. It wasn't the very first to innovate, but it adapted a lot, and it has one of the biggest impacts on our lives, because we hear music every day.

When people first wanted to listen to music through the internet, one of the first services was Napster. Then, when money flowed in other directions and big companies started to feel some of it missing from their pockets, they shut it down. Luckily, after the music industry stopped being angry and realised it couldn't do anything about it, it simply adapted.
To adapt it needed to understand the internet, and now, after those 17 years, it probably understands it better. It probably doesn't understand everything, but it at least understands how hard it is to make money this way, when you can always try before you buy and you have access whenever you want, down to the second you want. The internet is traveling above time in the music industry now, and it still works. At least it's limited only by Einstein's theory of relativity.

To keep our time travel on track: the second biggest impact is search, when we simply search for what we like. Partly because it turned out that what we want is similar to what our friends need. That is obvious from a sociological perspective, but it was not obvious from a technological one that we are so similar to each other; we are a generation of our times.

So to spend time with people we like, we need to spend it in similar ways: do the same things, eat the same food, or, when we are younger, be at the same place. That's where Facebook comes in.

We also want to know what is going on in the world around us, in real time, now (Snapchat?). We want to stay aware of our interests, and we don't want to wait for the news on TV or read a big article about nothing interesting, often subjective and definitely filled with ads that we don't even see anymore; so we have Twitter, which keeps our information short.
And if we spend less time reading and more time watching images or videos on YouTube, which we can always skip because there is always a comment about it, we gain time that we simply lose somewhere else.
Hey, are we imaginary people? Are we real? Those movie blockbusters from Marvel and DC Comics.

Finally, as we are all different and also egoistic in our needs, we don't want to read this stupid post written by this miserable human who doesn't even know English well. How about hearing these words from a personal assistant while we are getting to a location cheaply (hello, Uber) without waiting for public transport?

I won't go any further, because I know where it's going now, don't I? Nah, I'm just waiting for it ;)

Time is what I value most. How about you?

]]>

]]>https://vane.pl/new-is-better/8f4b058a-4f6f-4987-a2ac-58f3a910b88dSun, 07 May 2017 22:11:52 GMTSo here I am in 2017, reorganising things for more content.

There is now only vane.pl, running in a Docker container behind nginx as a load-balancing proxy.

I removed all devlog.vane.pl.

I will sum up the history of this site:
First I made a simple blog there using Django. It took me 2 hours to set up, but it was spammed in the comments by bots after the first post. I still have the database with over 55,000 comments under that post.

So I created a custom blog using React on the frontend and Tornado on the backend, all working through a REST API. I even managed to make a smart text editor using Draft. But it was still a very unfinished project.

Recently it turned out that after 18 years Google is still unable to index JavaScript. Shame on you. As a result, all the history of this blog was removed from Google, sigh.

So today I decided to move 'everything', well, roughly 3 blog posts, to Ghost on my local computer by running it in Docker, and see how it performs.

It surprisingly worked well, and I was able to make a custom theme no different from the old blog in 4 hours.

Then I exported the Docker container and imported it on this host as an image.

I wrote some drafts on Medium and on LinkedIn, but I never published them. I am not confident publishing on third-party platforms. They often have publishing policies I don't understand, and sometimes they get acquired by companies I don't like.
I tried Twitter, but it doesn't give me more than 255 characters, dude, and this is already 400 words.

Over those years I've seen lots of technologies come and go. Most of them nobody remembers now, but they are still used somewhere in corporations.

Still, the content on my site represents me, and only me; nobody can take it down for whatever reason. Nobody in a corporate suit will decide behind shut doors on the top floor to change the policies and leave me behind. Whatever...

Hopefully, since everything is back on track and I have a great new blogging platform now, I will try to write more.

Working on creating a platform for writing, instead of actually writing, was not what I meant this site for.

I still need to improve my poor English writing skills, so chill.

My last blogging session? I don't remember it well now, but it lasted 2 years, from 2010 to 2012, and now it is 7 years later.