An essential aspect of software delivery and development is the collaboration and communication that takes place between development teams and operations professionals.

IT experts, programmers, web application developers, and DevOps experts have worked together to create numerous tools that make this possible. Discover precisely what DevOps tools are, why you need to track KPIs and metrics, and how to choose the right ones.

Explore our list and decide which one(s) you can use to assist you with your everyday tasks.

What is DevOps?

In short, the term “DevOps” is a combination of the terms development and operations.

The term refers to the tools, people, and processes that work together in software development. The primary goal is faster, more streamlined delivery.

DevOps uses technology and automation tools to improve productivity across collaborating teams. When you are working to scale your project, DevOps tools help you get there faster.

How to Choose the Right DevOps Tool

There is no secret method for choosing the proper DevOps tools. You are going to be implementing them across a variety of operational and development teams, so adopting them should be thought of more as a shift in the existing culture.

No single tool works across all areas of development and delivery, but several tools will work in different areas. You first need to understand your processes, and then you can more easily determine which DevOps products you will be able to utilize successfully.

A straightforward way to break down your cycle of development is by doing so in phases.

The main phases are:

Collaboration – deciding which tools everyone can agree on and share across multiple platforms for complete integration.

Planning – being able to share ideas, brainstorm, comment, and work towards a common goal.

Build – includes the development of software, along with coding against virtual or disposable duplicates, to speed up production and get more accomplished.

Continuous integration – obtaining constant and immediate feedback through the process of merging code. It happens many times a day using automatic testing tools.

Ideal for smaller teams in need of a fast, user-friendly configuration management tool. Developers working with dozens or hundreds of team members should use Puppet, while those in need of a quick, light, and secure management tool should consider Ansible.

Gradle has been around since 2009 as an alternative to Apache Ant and Maven. It is a build tool that lets users code in C++, Python, and Java, among other languages.

It is supported by NetBeans, IntelliJ IDEA, and Eclipse, and Google uses it as Android Studio’s official build tool. Gradle has a learning curve owing to its Groovy-based DSL, but it is worth the extra time investment for the time it will save in the long run.

Gradle is estimated to be up to 100 times faster than Maven, an increase in speed that owes largely to Gradle’s daemon and build cache.

The team has released a Kotlin-based DSL for users who would rather skip the learning process for Groovy.
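As a rough sketch of what a Gradle build looks like, here is a minimal build.gradle.kts using the Kotlin DSL mentioned above; the dependency coordinates are illustrative:

```kotlin
// Minimal Gradle Kotlin DSL build script for a Java project.
// The test dependency shown here is just an example.
plugins {
    java
}

repositories {
    mavenCentral()
}

dependencies {
    testImplementation("junit:junit:4.13.2")
}
```

Running `gradle build` against a script like this compiles the sources, runs the tests, and caches the results so unchanged work is skipped on the next run.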

CodePen is made with both developers and designers in mind. It is a social development platform meant to showcase websites. Developers can build web projects online and then instantly share them.

CodePen’s influence extends to building test cases and fueling innovation. Coding results are viewable in real time. CodePen is a place to explore new ideas, improve skills, socialize, and showcase talent to potential employers.

The code can be written in a browser.

Its customizable editor suits developers of different experience levels.

It supports preprocessing syntaxes that compile directly to HTML, CSS, and JavaScript.

TypeScript is a popular solution developed openly on GitHub. It compiles to JavaScript that runs on any host supporting ECMAScript 3 or newer. TypeScript is best suited for large apps that need robust components and high productivity.

Developers use TypeScript to leverage complex code, interfaces, and libraries. It increases efficiency when coordinating JS libraries and workflows. Code refactoring, defining interfaces, static checking, and insights into the behavior of libraries work seamlessly with TypeScript.
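A small sketch of the static checking described above; the `Build` interface and `summarize` function are invented for illustration, not taken from any library:

```typescript
// An interface documents the shape of data and lets the compiler
// verify every call site before the code ever runs.
interface Build {
  id: number;
  passed: boolean;
}

function summarize(builds: Build[]): string {
  const passing = builds.filter((b) => b.passed).length;
  return `${passing}/${builds.length} builds passing`;
}

// A call like summarize([{ id: 1 }]) is rejected at compile time
// because the `passed` field is missing.
console.log(summarize([{ id: 1, passed: true }, { id: 2, passed: false }]));
// -> 1/2 builds passing
```

The same checks apply when consuming third-party JS libraries through type definitions, which is where much of the refactoring and library-insight benefit comes from.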

Vue.js is a front-end solution for building web interfaces. It is a JavaScript library that can grow into a full framework. Vue owes some of its success to its streamlined design and cutting-edge approach.

Vue is easy to learn. Its scaled solutions appeal to a variety of developers. UIs and single-page applications can be built using Vue.

Vue is a progressive JavaScript framework existing as an MIT-licensed open source project.

Several supporting tools and companion libraries coordinate with the JavaScript core.

Vue is widely accepted by the developer community and is continuing to grow.

Designed from the ground up to be incrementally adoptable, scaling from a view-layer library to a framework for complex single-page applications.

Angular has been one of the top front-end solutions for years. Its success owes partly to being a Google product, but it has also amassed a diverse following in the GitHub developer community. Its latest version is considered a significant improvement in the technology.

Angular can build web applications for both mobile and desktop platforms. The structured framework dramatically reduces the redundancies associated with writing code.

Angular is open-source.

Created from the input of a team at Google, corporations, and individuals.

Django is a powerful Python web framework designed for experienced developers, but it can also be learned quickly. Django emphasizes practicality, security, and efficiency to ease the development of database-driven websites.

Django supports projects on the back end of development. Developers can work liberally because Django helps them avoid common mistakes, and apps can be written more efficiently using the flexible framework.

Django is an asset for fast-growing sites. It facilitates dynamic applications and rapid scalability.
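To sketch the kind of database-driven development Django eases, here is a hypothetical model definition (this lives in a Django project’s models.py, so it is not standalone-runnable; the model and field names are invented):

```python
from django.db import models

class Article(models.Model):
    """Hypothetical model: Django derives the database schema,
    query API, and admin forms from this one declaration."""
    title = models.CharField(max_length=200)
    published = models.DateTimeField(auto_now_add=True)

    def __str__(self):
        return self.title
```

From a declaration like this, a `makemigrations`/`migrate` cycle generates and applies the schema, which is a large part of how Django supports rapid scaling.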

Continuous Integration DevOps Tools

Bamboo is a CI/CD server solution developed by Atlassian. It works from the code phase through to continuous integration, delivery, and deployment.

It is compatible with Jira Software, Fisheye, Crucible, and hundreds of other tools. Bamboo is available in a variety of languages and features a plethora of functions, including those for deployment and search.

With dedicated agents, you can run fixes and builds instantly to keep production moving. A clear visualization of all Jira issues lets each team see what they need to do before anything goes live.

For many users, the cost of Bamboo will make it a hard sell compared to Jenkins. For projects and teams with a budget, Bamboo may be preferable for a few reasons. Pre-built functionalities mean that Bamboo’s automation pipeline takes less time to configure than Jenkins.

Three builds can run at the same time, and extra agents can be added as you need them. Before you decide to make any changes, you can run a build, check it, and complete automated testing.

Whenever you want to run a report on a build, you can. You don’t have to wait for a build to finish to find out that something is going wrong.

A forum is available that provides access to peer support, or you can file a request to have a feature fixed or repair any bugs.

Sublime Text is a text editor for coding, markup, and prose. It is a sophisticated cross-platform solution with a Python programming interface. Sublime Text natively supports languages and plugins under free-software licenses.

As a high-level tool, Sublime Text requires time to master. The focus is on performance over functionality, yet the UI is friendly and comes with remarkable features.

Plugins augment the built-in functionality of the Python API. Its package ecosystem provides easy access to thousands of community-built items.

Sublime Text is free to evaluate, but is proprietary and requires the purchase of a license.

The main focus of Sumo Logic is log data. It’s a tool that’s built to help you understand your log data and make more sense of it. To do this, you call upon a variety of features that analyze this data in immense detail.

Sumo Logic can provide your organization with a deep level of security analytics by merging this with integrated threat intelligence.

Can be scaled infinitely

Works with Azure Hybrid applications

Helps reduce your downtime and move to a more proactive monitoring system

Postman is used for performing integration testing on APIs. It delivers speed and efficiency and improves performance. Postman performs well at both manual and exploratory testing.

The GUI functions can be used as a powerful HTTP client for testing web services. Postman markets itself as the only platform that can satisfy all API needs. It supports all the stages of the API lifecycle.

Developers can automate tests for a variety of environments. These tests can be applied to persistent data, simulations, or other measures of user interaction.

Deployment Applications

A DevOps automation tool, Jenkins is a versatile, customizable open source CI/CD server.

The Butler-inspired name is fitting. Jenkins can, with proper instruction, perform many of a user’s most tedious and time-consuming tasks for them. Success can be measured at each stage of an automated pipeline, allowing users to isolate specific problem-points.

The pipeline setup can be imposing for first-time users, but it does not take long to learn the interface. Jenkins is a crucial tool for managing difficult and time-consuming projects.

Jenkins runs on Windows, Linux and Mac OS X.

Jenkins can be set up with a custom configuration or with plugins.

Jenkins has been criticized for its UI, which some feel is not user-friendly. Many users take no issue with the interface. This is a concern that seems to come down to personal preference.
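A minimal declarative Jenkinsfile sketch of the staged, automated pipeline described above; the stage names and shell commands are placeholders for your own build and test steps:

```groovy
// Minimal declarative Jenkinsfile; each stage reports success or
// failure separately, isolating the problem point.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh './gradlew assemble'   // hypothetical build command
            }
        }
        stage('Test') {
            steps {
                sh './gradlew test'       // hypothetical test command
            }
        }
    }
    post {
        failure {
            echo 'Check the failed stage in the pipeline view.'
        }
    }
}
```

Checking a file like this into the repository root is what lets Jenkins measure success at each stage of the pipeline.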

CA Release Automation makes continuous delivery possible, with automated deployments that happen at regulated speeds across your entire enterprise.

What used to take days can be done in just a few minutes, so no unexpected work pops up out of nowhere to slow your productivity. You can be first to market with shorter release cycles that happen up to 20 times faster than before.

Every complicated aspect of applications, environments, and tools is controlled by one program. Your visibility increases, and your reliability and consistency improve as well. Some users report production errors down by as much as 98%. It is both cloud- and mainframe-ready for quick and easy integration with your existing infrastructure.

Automated deployments can be set up across container, legacy, and cloud environments with the XebiaLabs software delivery pipeline.

The likelihood of failed deployments and errors during the process drops while speeds increase. You stay in control of the deployment with a self-service option.

Visibility into the status of deployment environments and applications improves. The DevOps tool works easily alongside the programs and systems you already use, so everything across public and private clouds is handled with ease. Enterprise security and centralized auditing are also capabilities of XebiaLabs.

Developers can reduce time spent on the administrative side, allowing for much more to be done in a shorter time frame.

Operations & DevOps Monitoring Tools

Nagios is a free tool that is one of the most popular DevOps applications available. Allowing for real-time infrastructure monitoring, Nagios feeds out graphs and reports as you need them, as the data is being produced.

The tool’s reporting provides early detection of outages, security threats, and errors. Plug-ins are a significant draw for Nagios users.

When problems arise, you are made aware of them instantly. Many issues can even be resolved automatically as they are found.

There are thousands of add-ons available for free, as well as many tutorials and how-tos. A large helpful community supports Nagios.
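Nagios plug-ins are simple programs that print a status line and signal state through their exit code (0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN). A minimal sketch of a custom check, with hypothetical disk-usage thresholds:

```shell
# Sketch of a custom Nagios check plug-in, written as a function.
# Nagios reads the first line of output and the exit code;
# the 85%/95% thresholds here are illustrative.
check_root_disk() {
    used=$(df -P / | awk 'NR==2 {print $5}' | tr -d '%')
    if [ "$used" -ge 95 ]; then
        echo "CRITICAL - root filesystem ${used}% full"; return 2
    elif [ "$used" -ge 85 ]; then
        echo "WARNING - root filesystem ${used}% full"; return 1
    else
        echo "OK - root filesystem ${used}% full"; return 0
    fi
}

check_root_disk
echo "plugin exit code: $?"
```

Dropping a script like this into the plug-ins directory and referencing it from a service definition is the usual way Nagios is extended, which is why the add-on ecosystem is such a draw.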

Every change that happens inside of your program can be seen clearly on one platform with New Relic.

Not only do they offer you the opportunity to watch what’s happening, but you can also fix problems, speed up deploy cycles, and take care of other tasks related to DevOps. The team will have the information they need to run everything in a way that works for everyone.

Better customer, business, and employee value is the primary focus of PagerDuty.

They offer over 200 different integrations across multiple tools so that you can ticket, market, and collaborate with what you’ve already established. Some of the other features offered include analytics, on-call management, and modern incident response.

You will have a clear picture of what’s taking place and any disruptions that occur, and you can spot patterns in the performance of your builds and deployments throughout delivery. Rapid resolutions, quick collaboration, and business responses are orchestrated and organized for your team.

Opportunities and risks for your company become visible with the Splunk DevOps product. Splunk delivers predictive, actionable insights using artificial intelligence and machine data.

Its business analytics can help you better understand:

Why you are losing customers

How much money you could make in certain situations

Whether or not the people using your programs accept the new features and products you introduce.

Raygun recently released an application performance monitoring platform used to diagnose performance issues. Raygun is user-friendly and conducts much of its work with little set-up. Error reports are generated automatically with prioritization letting users know which problems need to be addressed first.

Consolidates both development and operations reporting for all relevant teams.

Raygun APM can be applied to other DevOps tools like Jenkins to track development at every level.

28. Plutora

Plutora has been dubbed one of the most complete value stream management (VSM) platforms out there. It is designed to give you everything you need to scale DevOps throughout your organization. Plutora lets you map and visualize all of your value streams, drawing data from all of your critical systems.

Contains governance & compliance features that ensure policy adherence for every process

29. Loom Systems

Loom Systems calls upon artificial intelligence and machine learning to help prevent problems in organizations. It does this by predicting what issues may arise, so developers can take steps to stop them from happening.

The core of Loom Systems is ‘Sophie’, essentially your virtual IT assistant. She suggests responses to issues as soon as they are detected, and she manages your feedback by learning from what went wrong and automatically improving.

Loom claims Sophie is currently the only system in the industry that can accurately predict IT issues before they create a negative impact on customers, while providing solutions in easy-to-understand terms.

It’s suggested that around 42% of P1 incidents are predicted using Loom Systems

Loom can boost business productivity by adding automation

Provides you with more time to focus on other essential DevOps tasks

30. Vagrant

This DevOps tool is built around the concept of automation. It can be used in conjunction with other management tools on this list, and it lets you create virtual machine environments all in the same workflow.

By doing this, it gives the entire DevOps team a better environment to continue with development. There’s a shorter set-up time for the development environment, which improves productivity as well.

Many companies have started using Vagrant to help transition into the DevOps culture.

Vagrant is compatible with various operating systems, including Windows, Mac, and Linux

Can be used and integrated with Puppet, Ansible, Chef, and more
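A minimal Vagrantfile sketch of the shared-environment workflow described above; the box name and provisioning command are illustrative:

```ruby
# Minimal Vagrantfile: "vagrant up" builds the same VM for every
# team member, cutting environment set-up time.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"                            # example base box
  config.vm.provision "shell", inline: "apt-get update -y"    # example provisioner
end
```

Because the file is checked into version control, the whole team gets an identical development environment from one command.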

31. Prometheus

Prometheus is a service monitoring system that helps to power your metrics and alerting. It does this by using a highly dimensional data model, along with powerful queries.

One of the great things about Prometheus is that you can visualize data in a variety of ways. As such, this makes analyzing data far easier for everyone involved.

Plus, you can export data from third-party solutions into Prometheus, which essentially means it works with different DevOps tools, such as Docker.

Custom client libraries that are easy for you to implement

A very flexible query language
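A minimal prometheus.yml scrape configuration sketch; the job name and target address are placeholders for your own exporters:

```yaml
# Minimal Prometheus configuration: scrape one target every 15 seconds.
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: "my-app"                      # illustrative job name
    static_configs:
      - targets: ["localhost:9100"]         # e.g. a node_exporter instance
```

Once metrics are flowing, the dimensional data model is queried with PromQL; a query such as `rate(http_requests_total[5m])` is the typical starting point for request-rate dashboards.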

32. Chef

Chef is all about improving your DevOps processes and making life far easier for you. The main focus is on increasing the speed and consistency of tasks, while also enabling you to scale them with relative ease.

The exciting thing about Chef is that it’s a cloud-based system, which means you can access it from any device, whenever you want. One of the drawbacks of cloud systems is that they might be unavailable due to server issues. However, Chef is found to maintain a high level of availability.

With this tool, you can make complicated tasks far easier by calling on automation to carry out different jobs and free up your own time.

Helps to control your infrastructure

Is used by big companies like Facebook and Etsy
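A sketch of a Chef recipe automating the kind of task described above; the package, template, and service names are illustrative:

```ruby
# Minimal Chef recipe: declare the desired state and Chef converges
# the node to it on every run.
package "nginx"

template "/etc/nginx/nginx.conf" do
  source "nginx.conf.erb"                 # hypothetical cookbook template
  notifies :reload, "service[nginx]"      # reload only when the file changes
end

service "nginx" do
  action [:enable, :start]
end
```

Because recipes are declarative, the same run is safe to repeat across many nodes, which is where the speed and consistency gains come from.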

Collaboration & Planning Tools

For many software companies, Git is the go-to solution for managing remote teams.

Git is used for tracking a team’s progress on a particular project, saving multiple versions of the source code along the way. Organizations can develop branching versions of the code to experiment without compromising the entire project.

Git requires a hosted repository. The obvious choice is GitHub, although competitor Bitbucket has much to offer. Bitbucket offers free unlimited private repos for up to five team members.

Slack can be integrated with either GitHub or Bitbucket.

Separate branches of source code can be merged through Git.

Source code management tools like Git are necessary for the modern software development field. In that niche, Git stands as the leader.
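The branch-and-merge workflow described above can be sketched in a throwaway repository (this assumes the git CLI is installed; the file name, identity, and commit messages are illustrative):

```shell
# Demo of experimenting on a branch without compromising the main line.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"   # hypothetical identity for the demo
git config user.name "Dev"
main=$(git symbolic-ref --short HEAD)     # default branch name varies by setup

echo "v1" > app.txt
git add app.txt
git commit -qm "initial version"

git checkout -qb experiment               # branch off to experiment safely
echo "v2-experimental" > app.txt
git commit -qam "try a new approach"

git checkout -q "$main"                   # the main line is untouched here
git merge -q experiment                   # fold the experiment back in
cat app.txt                               # -> v2-experimental
```

Every commit along the way remains retrievable, which is what makes this kind of experimentation low-risk.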

Slack gives your team the opportunity to communicate and collaborate on one platform.

Valuable information can quickly and easily be shared with everyone involved in a specific project on message boards.

Channels can be set up by topic, team, project, or however else you see fit. When information from the conversation is needed, there is a search option that allows for easy access. Slack is compatible with many services and apps you are already using.

NPM assists organizational efforts by simultaneously reducing risk and internal friction. It consolidates resources under a single sign-on to manage user access and permissions. This helps to support operations which depend on structured flows.

NPM is open source.

Interacts with the world’s largest software registry.

NPM has 100% parity with public registry features which are in high demand today.

39. GitKraken

In addition to advanced cross-platform functionality, GitKraken is reportedly a pleasure to use. It is designed with a fast learning curve in mind.

This intuitive GUI client is consistent and reliable. It is a version control system which goes beyond basic software development. Power is merged with ease-of-use through features like quickly viewable information via hovering.

GitKraken is available on Windows, Mac OS, Ubuntu, and Debian.

Built on Electron, an open-source framework.

A free version is available.

Among its capabilities are pushing, branching, merging, and rebasing.

GitKraken is independently developed.

40. Visual Studio

Visual Studio is a Microsoft product. It is an integrated development environment (IDE). Visual Studio has applications for both the web and computer programs.

The broad spectrum of web uses includes websites and associated apps, services, and mobile technology. It is considered a go-to, best-in-class solution.

Visual Studio’s Live Share offers benefits beyond Microsoft platforms. It is available for developers and services on any platform and in any language. Both native and managed code can be used.

Planning

41. GitLab

GitLab is an internal management solution for git repositories. It offers advantages for the DevOps lifecycle via a web-based engine.

The complete software lifecycle comes under a single application. Starting with project planning and source code management, GitLab extends to the CI/CD pipeline, monitoring, and security. The result is a software lifecycle that is twice as fast.

GitLab’s established features include planning, creation, management, verification, packaging, release, configuration, monitoring, security, and defense. Its defend feature is being introduced in 2019, and all of the other features have updates and/or expanded functions in the works for 2019.

42. Trello

Trello is a DevOps collaboration tool that helps improve the organization of your projects. It helps you get more done by prioritizing projects and improving teamwork.

You can set up different teams and create tasks for everyone to carry out. This ensures that all team members are on the same page and know what they have to do – and what’s essential for them.

Trello allows everyone to interact and communicate with one another on one straightforward and intuitive platform.

Highly flexible, meaning you can use Trello however you see fit

Integrates a range of third-party apps that your team already uses

Keeps your team in sync across all devices

Continuous Feedback

43. Mouseflow

This is very much a DevOps tool that’s built around the idea of continuous feedback from the customer. It won’t deliver surveys or direct words of feedback, but it does let you see how customers react.

Mouseflow uses heatmaps, so you see where all of your visitors are going on your website, and what they’re doing. It’s a genius way of figuring out where the positive and negative aspects of your site lie.

With this tool, you can unlock analytics data that helps you understand why people are possibly leaving your site/application, allowing you to make changes to address this.

Very easy to use and works on all web browsers

Contains a Form Analytics feature to see why visitors leave online forms

There’s no better way to understand what your customers are thinking than asking them.

SurveyMonkey allows you to do just that, along with several other operations, including research, gathering new ideas, and analyzing the performance of your business.

Continuous feedback is how to uncover what your clients are expecting from you. Not only can you survey your customers, but you can also use it to find out what your employees are thinking about how things are working within the company.

Tracking, obtaining, managing, and addressing customer requests are possible through Jira Service Desk.

It’s where customers can go to ask for help or fill out various forms, so you can get to the bottom of any issues and improve the overall experience of your product.

Service requests are automatically organized and prioritized by importance with the Jira Service Desk tool.

Your employees can work through the requests quickly to resolve issues more efficiently. When there are critical submissions, an alert will come through ensuring that you don’t miss anything.

You can also create a resource knowledge base that your clients can use to answer their own questions.

46. SurveyGizmo

This is another feedback tool that works similarly to SurveyMonkey. You can invite people to respond to your surveys and gather a steady stream of information from your customers.

There are many different ways you can construct a survey and select the questions you want to include. With this tool, you’re empowered to make smarter decisions based on the research you generate. There are great segmentation and filtering features that help you find out what’s good and bad about your product.

Plus, the surveys look more appealing to potential respondents, which could mean more people are willing to fill them in.

Offers quick and easy survey configuration

Can correlate feedback to positive and negative experiences for a simple overview

Issue Tracking

Mantis Bug Tracker provides the ability to work with clients and team members in an efficient, simple, and professional manner.

It’s a practical option for clearing up issues quickly while maintaining a balance of power and simplicity. You have the option of customizing the categories of problems along with workflows and notifications. Get emails sent when there are problems that need to be resolved right away.

You maintain control of your business while allowing specific users access to what you want them to be able to get to.

48. WhiteSource Bolt

Security is a critical concern in DevOps.

With WhiteSource Bolt, you have an open source security tool that helps you zero in on any security issues and fix them right away.

It’s a free tool, and you can use it within Azure or GitHub as well. The main aim of the tool is to give you real-time alerts that show all of your security vulnerabilities. It then gives you some suggested fixes that you can act upon to shore up security and remove the weakness.

Supports well over 200 different programming languages

Provides up to 5 scans per day

Can scan any number of public and private repositories

49. Snort

Snort is another security tool for DevOps that works to protect a system from intruders and attacks.

This is considered one of the most powerful open-source tools around, and it can analyze traffic in real time. That makes intruder detection far faster and more efficient. Snort can also flag aggressive attacks against your system.

There are over 600,000 registered users on the Snort platform right now, making it the most widely deployed intrusion prevention system out there.

Packet logging and analysis provide signature-based attack detection

Performs protocol analysis and content searching

Has the ability to detect and flag a variety of different attacks
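Snort’s signature-based detection is driven by rules. A sketch of a single rule in that syntax — the SID, message, and content pattern are placeholders, not a production signature:

```
# Alert on inbound HTTP traffic containing a "../" sequence
# (illustrative directory-traversal check).
alert tcp any any -> $HOME_NET 80 (msg:"Possible directory traversal"; content:"../"; sid:1000001; rev:1;)
```

Each rule names a protocol, source, and destination, then a set of match options; matching packets are logged and raised as alerts.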

50. OverOps

Code breaks are part and parcel of the DevOps life. OverOps is a tool that is useful for identifying any breaks in your code during production.

Not only that, but it gets down to the root cause of an issue and tells you why and exactly when a code break happened. You get a complete picture of the code at the moment the anomaly was detected, so you can reproduce and fix it.

Integrates with Jenkins

Stops you from promoting bad code

Uses Artificial Intelligence to spot any new issues in real-time

51. Code Climate

Code Climate is one of the top issue tracking tools for DevOps professionals. With this software, you get a detailed analysis of how healthy your code is. You can see everything from start to finish, which lets you pinpoint any issues.

DevOps professionals can easily see any problems in a line of code and fix them as soon as possible. Therefore, you can start producing better code with fewer errors and bugs – which will only improve the overall customer experience upon launch.

Zendesk works for companies of all sizes by improving customer service and support.

Choose from hundreds of applications, or use the features as is. Your development team can even build a completely customized tool using the open APIs offered on the Apps Marketplace.

Zendesk provides benchmark data access across your industry. This is valuable data for improving your customer interactions.

In Closing

When you integrate DevOps early in software development, you are streamlining the process. Anyone looking to create and deliver a software program more quickly and efficiently than the more traditional methods can utilize these applications.

Decide which applications above are most useful for your needs and start developing more efficiently today!

What reviews say

As evident by now, Dobby is a selfie drone and does not come with any kind of remote controller/transmitter and is purely operated with the app – the control distance is 100 meters in open area with no obstacles, when the drone goes beyond the operating frequency, it auto returns itself.

The front of the Dobby Drone has a single camera that can be set into one of four positions before flying. The compact design means a motorized camera [on a gimbal] isn’t viable, but the small size of the drone means you can get in tighter and closer than on a larger UAV.

Sure, Parrot’s family of MiniDrones start at around $150, but the cameras on those are almost unusable. Video quality on the Dobby isn’t exactly amazing either, but it’s far from terrible. I’d say it’s a step or two down from the quality you’d get on a flagship smartphone these days, but more than adequate for social media. However, I found images were occasionally a bit soft in terms of focus. Depending on the light that you’re shooting in, you might see exposure change mid-flight, which can be a bit jarring.

As fun as Dobby is to play with, it’s let down by a comparatively short battery life; each battery will only give you about nine minutes per charge. This is cut shorter due to the fact the drone automatically lands itself when you’ve got about 15 percent charge left, presumably to prevent it from suddenly falling out of the sky. In some cases, we only got about five minutes of flight time before we had to stop flying and swap battery.

What users say

With 15 reviews on Zerotech’s Amazon page, this drone now has an average rating of 4.5 out of 5.

D. Davis gives it a full score, saying:

Perfect, stable, and convenient. I’m really pleased with this drone. It feels like everything has been thought out. The GPS and other ways that the drone stays locked in place are very good. Best of all, it fits in my pocket easily and I can use my cellphone (always on me anyway) to control it.

An unnamed customer adds:

Done a lot of research and compared it with Yuneec, Hover, and other small camera drones, but I finally decided to buy Dobby. (Mavic is great but it is kind of for professional users.) Pocket-sized and GPS positioning for outdoor flying are the two things attract me most.

It’s really good for family aerial selfies. The auto pull away short video is kind of magic especially on sunny days.

Evan G, meanwhile, laments the sizable price tag:

$400 for a drone this small? Tsk tsk. I was hoping to see this maybe in the $100 to $150 range, but $400? That’s the price of a brand new PS4 on release day.

Yuneec Breeze

Indoor positioning system helps it fly inside where GPS might be limited

Comes with two batteries

Flight time: 12 minutes

Photo credit: Yuneec.

Yuneec is making a name for itself with a range of drones that perform well but cost less than half of DJI’s.

The Yuneec Breeze, much bigger than the Zerotech Dobby, weighs around 400g and isn’t so foldable – only the propellers tuck in. So you won’t be pocketing this thing. But it does boast 4K videos.

Despite the size difference, its purpose is the same as the Dobby, and many of the software features are similar.

What reviews say

The mobile app, available for iOS and Android, is split into two sections: Tasks and Gallery. Tap on Tasks and you’re given five options to choose from: Pilot, Selfie, Orbit, Journey, and Follow Me. Pilot has the manual controls for flying around the way any other drone would with a regular controller.

After crashing it a half dozen times, we were pleased to discover that it’s actually pretty damn durable. We bashed it (inadvertently) into all manner of obstacles — bushes, branches, tree trunks, and even the side of a car — but in none of those instances did the drone become so damaged that it couldn’t start right up again.

As far as we can tell, the key to the Breeze’s resilience is its clever hinged props. It appears that Yuneec outfitted the drone with this feature so that the props could be folded inward for easy transport, but based on our observations, it seems that they also help the drone recover from run-ins with tree branches quite effectively. Instead of breaking on contact, they pivot backward ever so slightly, which seems to prevent them from snapping.

I did find a tendency for it to drift around disconcertingly on occasion, forcing me at one point to swiftly hit the land button before a nearby rosebush got an unscheduled trimming. And, as with most drones, GPS means you can simply tap a “return-to-home” button when you want to bring the Breeze automatically back to its take-off point.

What users say

I’ve found the Yuneec Breeze to be an excellent product in nearly all regards. While it may not be the most advanced, fastest or highest resolution camera, it flies well, is stable and easy to control with an iOS device. As a heavy Canon user with a 51 mpx DSLR, in my opinion the photos are of excellent quality. I have played with the video and found it to be excellent as well but video is not my forte so I may not be the best judge. The Breeze folds up and fits into its hard plastic case, making it easy to transport even in a hiking day pack.

Shannon S finds it to be a “good product with some noticeable benefits and drawbacks.”

The video camera is not great in low light. Without a gimbal there’s just too much blur. There is slight blur when panning on video even in daylight. This thing is surprisingly stable in the air, so one thing you obviously want to do is send it up high and take a panoramic video of your surroundings.

Creativety found the indoor flying an eye-opener:

I can easily use it indoors, which initially scared me, but sure enough it was much easier than I imagined.

DJI Mavic Pro

Stealthy in black

Top speed: 64 kph

Flight time: 27 minutes

Photo credit: DJI.

Like with the MacBook Pro, the “pro” in the name means this is serious business – and entails an astonishing price tag.

Still, it’s essentially as easy to fly as the others on this list.

It’s no flyweight at 720g, but it’s nimbler than DJI’s other offerings – and remember it’s foldable. The range on this thing is amazing – up to 7km, if you dare go that far with the battery life.

What reviews say

Assuming you’re using the controller and not just the smartphone, you’ll do most of your flying with the control sticks, while you’ll manipulate the drone’s more advanced settings through your phone. The Mavic Pro flies smoothly and is pleasantly easy to maneuver. When you take your fingers off the sticks, it hovers steadily in place (we tested it on a day with almost no wind, it’s unclear how the Mavic might do on a blustery day). That’s especially helpful for lining up precise shots.

The camera and gimbal are very similar to what you find on the Phantom, only smaller. The camera uses the same sensor, shooting 4K video and 12 megapixel stills. The only difference is that the Mavic Pro doesn’t have as wide a field of view as the Phantom. The Mavic Pro does have the same forward-facing optical sensors as the Phantom 4, though, allowing it to detect obstacles and autonomously avoid crashes.

While it might not have the power to cut through really strong winds (DJI says it can handle winds up to 19-24 mph or 29-38 kph), it was able to keep the camera stable and fly steady in 10-15 mph winds and still get between 22-25 minutes of flight time before it landed itself. It does warn you when the winds are too strong for its motors, too.

What users say

The newest of this trio, it scores 4.1 out of 5 from 13 reviews.

Squatch LOVES Milo enthuses that it “does everything the Phantom 4 does but in a much smaller package.”

I pulled the Mavic out of the box and immediately realized how little this thing is compared to my Phantoms. Its size is definitely going to be the draw for most people (it’s small enough to fit in a shoulder camera bag, making it way more portable and comfortable to tote around than my Phantom 4). The four arms are all folded up into the body, making it about the size of a water bottle (slightly bigger) in its portable state.

Good Amazon Customer, an experienced RC hobbyist and drone pilot, is a convert.

I took a couple flights in high winds – 18 mph [29 kph] – and it worked perfectly. The camera is tiny, but of very high quality, probably equal to or better than most drones on the market (I think the Phantom 4 cam may be slightly better though).

That buyer adds some advice for newbies:

Word of warning though, these are not for beginners. No expensive camera drone is. Do your homework and spend time learning on smaller drones. Maybe buy a lower-priced camera drone and learn the ropes. Then get a Mavic Pro.

Many of the big company APIs are online only. WordNet can be downloaded and used offline.

WordNet is many times more powerful than any other dictionary or thesaurus out there.

The last point requires some explanation.

WordNet is not like your everyday dictionary. While a traditional dictionary features a list of words and their definitions, WordNet focuses on the relationship between words (in addition to definitions). The focus on relationships makes WordNet a network instead of a list. You might have guessed this already from the name WordNet.

WordNet is a network of words!

In the WordNet network, the words are connected by linguistic relations. These linguistic relations (hypernym, hyponym, meronym, pertainym and other fancy sounding stuff), are WordNet’s secret sauce. They give you powerful capabilities that are missing in ordinary dictionaries/thesauri.

We will not go deep into linguistics in this article because that is beside the point. But I do want to show you what you can achieve in your code using WordNet. So let’s look at the two most common use cases (which any dictionary or thesaurus should be able to do) and some advanced use cases (which only WordNet can do) with example code.

Common use cases

Word lookup

Let’s start with the simplest use case, i.e. word lookups. We can look up the meaning of any word in WordNet in three lines of code (examples are in Python).

```python
### checking the definition of the word "hacker"
# import the NLTK wordnet interface
>>> from nltk.corpus import wordnet as wn
# lookup the word
>>> hacker = wn.synset("hacker.n.03")
>>> print(hacker.definition())
a programmer for whom computing is its own reward;
may enjoy the challenge of breaking into other
computers but does no harm
```

Synonym and Antonym lookup

WordNet can function as a thesaurus too, making it easy to find synonyms and antonyms. To get the synonyms of the word beloved, for instance, I can type the following line in Python…

… and get the synonyms dear, dearest, honey and love, as expected. Antonyms can be obtained just as simply.

Advanced use cases

Cross Part of Speech lookup

WordNet can do things that dictionaries/thesauri can’t. For example, WordNet knows about cross Part of Speech relations. This kind of relation connects a noun (e.g. president) with its derived verb (preside), derived adjective (presidential) and derived adverb (presidentially). The following snippet displays this functionality of WordNet (using a WordNet based Python package called word_forms).

Being able to generate these relations is particularly useful for Natural Language Processing and for English learners.

Classification lookup

In addition to being a dictionary and thesaurus, WordNet is also a taxonomical classification system. For instance, WordNet classifies dog as a domestic animal, a domestic animal as an animal, and an animal as an organism. All words in WordNet have been similarly classified, in a way that reminds me of taxonomical classifications in biology.

The following snippet shows what happens if we follow this chain of relationships till the very end.

```python
### follow hypernym relationship recursively till the end
# define a function that prints the next hypernym
# recursively till it reaches the end
>>> def get_parent_classes(synset):
...     while True:
...         try:
...             synset = synset.hypernyms()[-1]
...             print(synset)
...         except IndexError:
...             break
...
# find the hypernyms of the word "dog"
>>> dog = wn.synset("dog.n.01")
>>> get_parent_classes(dog)
Synset('domestic_animal.n.01')  # dog is a domestic animal
Synset('animal.n.01')           # a domestic animal is an animal
Synset('organism.n.01')         # an animal is an organism
Synset('living_thing.n.01')     # an organism is a living thing
Synset('whole.n.02')            # a living thing is a whole
Synset('object.n.01')           # a whole is an object
Synset('physical_entity.n.01')  # an object is a physical entity
Synset('entity.n.01')           # a physical entity is an entity
```

To visualize the classification model, it is helpful to look at the following picture, which shows a small part of WordNet.

Image courtesy of the original WordNet paper.

Semantic word similarity

The classification model of WordNet has been used for many useful applications. One such application computes the similarity between two words based on the distance between them in the WordNet network. The smaller the distance, the more similar the words. In this way, it is possible to quantitatively determine that a cat and a dog are similar, a phone and a computer are similar, but a cat and a phone are not similar!

WordNet has comprehensive coverage of the English language. Currently, it has 155,287 English words. The complete Oxford English Dictionary has nearly the same number of modern words (171,476). WordNet was last updated in 2011. Some contemporary English words like bromance or chillax seem to be missing for this reason, but this should not be a deal breaker for most of us.

If you want to know more about WordNet, the following references are very helpful.

If you are using Linux and need to recover data, whether after physical or logical damage, there are many tools available for the purpose. To avoid confusion, I will discuss only one of the data recovery tools available for Linux: GNU ddrescue.

GNU ddrescue is a program that copies data from one file or block device (hard disk, CD/DVD-ROM, etc.) to another. It is a data recovery tool that helps you save data from a crashed partition: it tries to read every sector, and if a read fails it moves on to the next sector, where a tool like dd would simply fail. If the copying process is interrupted by the user, it can be resumed at any position later. It can even copy backwards.
This program is useful for rescuing data in case of I/O errors, because it does not necessarily abort or truncate the output. This is why you should use this program and not the dd command. I have recovered a lot of data from many disks (CD/hard disk/software RAID) over the years using GNU ddrescue on Linux, and I highly recommend this tool to Linux sysadmins.

Example: Rescue/recover a DVD-ROM in /dev/dvdrom on Linux
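The rescue itself might look like the following (a sketch in the style of the GNU ddrescue manual’s optical-media example; the device name, image filename, and mapfile name are assumptions):

```shell
# first pass: copy everything readable without retrying, using the
# 2048-byte sector size of optical media; progress goes to the mapfile
ddrescue -n -b2048 /dev/dvdrom dvd-image mapfile

# second pass: retry only the bad areas in direct-access mode
ddrescue -d -b2048 /dev/dvdrom dvd-image mapfile
```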

Please note that if there are no errors (errsize is zero), dvd-image now contains a complete image of the DVD-ROM, and you can write it to a blank DVD-ROM on a Linux based system:

    # growisofs -Z /dev/dvdrom=/path/to/dvd-image

Example: Resume failed rescue

In this example, while rescuing the whole drive /dev/sda to /dev/sdb, /dev/sda freezes up at position XYZFOOBAR (troubled sector # 7575757542):
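The commands for such a resumable rescue might look like this (a sketch; the `-f` flag is required when writing to a block device, and the mapfile name is an assumption):

```shell
# first pass: copy the easy areas, recording progress in the mapfile
ddrescue -f -n /dev/sda /dev/sdb mapfile

# after the freeze/reboot: rerun with the same mapfile to resume,
# retrying the troubled sectors up to three times in direct mode
ddrescue -d -f -r3 /dev/sda /dev/sdb mapfile
```

Because the mapfile records which sectors have already been copied, rerunning the same command continues where the previous run stopped instead of starting over.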

A note about dd_rescue command and syntax

On Debian / Ubuntu and a few other distros, you may end up installing another utility called dd_rescue. dd_rescue is a program that copies data from one file or block device to another; like GNU ddrescue, it is a tool to help you save data from a crashed partition.

Examples: dd_rescue

To make an exact copy of /dev/sda (damaged) to /dev/sdb (make sure sdb is empty), type the following command:

    # ddrescue /dev/sda /dev/sdb

Naturally, the next step is to run fsck on the /dev/sdb partition to recover/save data. Remember: do not touch the originally damaged /dev/sda. If this procedure fails, you can send your disk to a professional data recovery service. For example, if /home (user data) is on /dev/sda2, you need to run the command on /dev/sdb2:

    # fsck /dev/sdb2

Once fsck has run, mount /dev/sdb2 somewhere and see if you can access the data:

    # mount /dev/sdb2 /mnt/data

Finally, take a backup using tar or any other command of your choice. The ddrescue command supports tons of options; read the man page for more information:

    # man dd_rescue

OR see the GNU ddrescue man page:

    # man ddrescue

This post is a success story of one imaginary news portal, and you’re the happy owner, the editor, and the only developer. Luckily, you already host your project code on GitLab.com and know that you can run tests with GitLab CI. Now you’re curious whether it can be used for deployment, and how far you can go with it.

To keep our story technology stack-agnostic, let’s assume that the app is just a set of HTML files. No server-side code, no fancy JS assets compilation.

Important detail: the command expects you to provide AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables. Also you might need to specify AWS_DEFAULT_REGION.

Let’s try to automate it using GitLab CI.

First Automated Deployment

With GitLab, there’s no restriction on what commands you can run. You can set up GitLab CI according to your needs, as if it were the local terminal on your computer. As long as you can execute a command there, you can tell CI to do the same for you in GitLab. Put your script in .gitlab-ci.yml and push your code – that’s it: CI triggers a job and your commands are executed.

Let’s add some context to our story: our website is small, there are 20-30 daily visitors, and the code repository has only one branch: master.

Let’s start by specifying a job with the command from above in .gitlab-ci.yml:
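A first version of that job might look something like this (the bucket name is a placeholder):

```yaml
deploy:
  script: aws s3 cp ./ s3://yourbucket/ --recursive --exclude "*" --include "*.html"
```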

It is our job to ensure that there is an aws executable. To install awscli we need pip, a tool for installing Python packages. Let’s specify a Docker image with Python preinstalled, which should contain pip as well:
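With the image specified, the job could look something like this (image tag and bucket name are placeholders):

```yaml
deploy:
  image: python:latest
  script:
  - pip install awscli
  - aws s3 cp ./ s3://yourbucket/ --recursive --exclude "*" --include "*.html"
```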

The installation of awscli extends the job execution time, but it is not a big deal for now. If you need to speed up the process, you can always look for a Docker image with preinstalled awscli, or create an image by yourself.

Also, let’s not forget about these environment variables, which you’ve just grabbed from AWS Console:
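For example (the values shown are AWS’s documented example credentials, not real ones):

```yaml
variables:
  AWS_ACCESS_KEY_ID: "AKIAIOSFODNN7EXAMPLE"
  AWS_SECRET_ACCESS_KEY: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"

deploy:
  script: aws s3 cp ./ s3://yourbucket/ --recursive --exclude "*" --include "*.html"
```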

It should work, however keeping secret keys open, even in a private repository, is not a good idea. Let’s see how to deal with it.

Keeping Secret Things Secret

GitLab has a special place for secret variables: Settings > Variables

Whatever you put there will be turned into environment variables. Only an administrator of a project has access to this section.

We could remove variables section from our CI configuration. However, let’s use it for another purpose.

Specifying and Using Non-secret Variables

When your configuration gets bigger, it is convenient to keep some of the parameters as variables at the beginning of your configuration. Especially if you use them in multiple places. Although it is not the case in our situation yet, let’s set the S3 bucket name as a variable, for demonstration purposes:
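Something like this (the bucket name is a placeholder):

```yaml
variables:
  S3_BUCKET_NAME: "yourbucket"

deploy:
  script: aws s3 cp ./ s3://$S3_BUCKET_NAME/ --recursive --exclude "*" --include "*.html"
```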

Because the audience of your website grew, you’ve hired a developer to help you. Now you have a team. Let’s see how teamwork changes the workflow.

Dealing with Teamwork

Now there’s two of you working in the same repository. It is no longer convenient to use the master branch for development. You decide to use separate branches for both new features and new articles and merge them into master when they are ready.

The problem is that your current CI config doesn’t care about branches at all. Whenever you push anything to GitLab, it will be deployed to S3.

Preventing it is straightforward. Just add only: master to your deploy job.
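The deploy job might then read (the bucket name, as before, is a placeholder):

```yaml
deploy:
  script: aws s3 cp ./ s3://yourbucket/ --recursive --exclude "*" --include "*.html"
  only:
  - master
```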

You don’t want to deploy every branch to the production website

But it would also be nice to preview your changes from feature-branches somehow.

Setting Up a Separate Place for Testing

Patrick (the guy you recently hired) reminds you that there is such a thing called GitLab Pages. It looks like a perfect candidate for a place to preview your work in progress.
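The two jobs might be sketched like this (the pages job follows GitLab Pages’ convention of publishing an artifact folder named public; the exact copy step is an assumption):

```yaml
deploy:
  script: aws s3 cp ./ s3://yourbucket/ --recursive --exclude "*" --include "*.html"
  only:
  - master

pages:
  script:
  - mkdir -p public
  - cp ./*.html public/
  artifacts:
    paths:
    - public
  except:
  - master
```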

We specified two jobs. One job deploys the website for your customers to S3 (deploy). The other one (pages) deploys the website to GitLab Pages. We can name them “Production environment” and “Staging environment”, respectively.

All branches, except master, will be deployed to GitLab Pages

Introducing Environments

GitLab offers support for environments, and all you need to do is specify the corresponding environment for each deployment job:
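For instance, the production job might gain a single line (job and environment names are illustrative):

```yaml
deploy to production:
  script: aws s3 cp ./ s3://yourbucket/ --recursive --exclude "*" --include "*.html"
  only:
  - master
  environment: production
```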

GitLab keeps track of your deployments, so you always know what is currently being deployed on your servers:

GitLab provides full history of your deployments per every environment:

Now, with everything automated and set up, we’re ready for the new challenges that are just around the corner.

Deal with Teamwork Part 2

It has just happened again. You’ve pushed your feature-branch to preview it on staging; a minute later Patrick pushed his branch, so the Staging was re-written with his work. Aargh!! It was the third time today!

Idea! Let’s use Slack to notify us of deployments, so that people will not push their stuff if another one has been just deployed!

Since the only thing you want to be notified of is deployments, you can uncheck all the checkboxes except “Build” in the settings above. That’s it. Now you’re notified of every deployment:

Teamwork at Scale

As time passed, your website became really popular, and your team has grown from 2 to 8 people. People develop in parallel, so situations where people wait for each other to preview something on Staging have become pretty common. “Deploy every branch to staging” stopped working.

It’s time to modify the process one more time. You and your team agreed that if someone wants to see his/her changes on the staging server, he/she should first merge the changes to the “staging” branch.

The change of .gitlab-ci.yml is minimal:

    except:
    - master

is now changed to

    only:
    - staging

People have to merge their feature branches before preview on Staging

Of course, it requires additional time and effort for merging, but everybody agreed that it is better than waiting.

Handling Emergencies

You can’t control everything, so sometimes things go wrong. Someone merged branches incorrectly and pushed the result straight to production exactly when your site was on top of HackerNews. Thousands of people saw your completely broken layout instead of your shiny main page.

Luckily, someone found the Rollback button, so the website was fixed a minute after the problem was discovered.

Rollback relaunches the previous job with the previous commit

Anyway, you felt that you needed to react to the problem and decided to turn off auto deployment to production and switch to manual deployment. To do that, you needed to add when: manual to your job.
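The job definition might then read (placeholders as before):

```yaml
deploy:
  script: aws s3 cp ./ s3://yourbucket/ --recursive --exclude "*" --include "*.html"
  only:
  - master
  when: manual
```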

As you expected, there will be no automatic deployment to Production after that. To deploy manually go to Pipelines > Builds, and click the button:

Finally, your company has turned into a corporation. You have hundreds of people working on the website, so all the previous compromises are not working anymore.

Review Apps

The next logical step is to boot up a temporary instance of the application per feature-branch for review.

In our case, we set up another bucket on S3 for that. The only difference is that we copy the contents of our website to a “folder” named after the development branch, so that the URL looks like this:

The interesting thing is where we got this $CI_BUILD_REF_NAME variable from. GitLab predefines many environment variables so that you can use them in your jobs.

Note that we defined the S3_BUCKET_NAME variable inside the job. You can do this to rewrite top-level definitions.
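Putting the pieces together, a review-apps job might look like this (bucket and environment names are illustrative):

```yaml
review apps:
  variables:
    S3_BUCKET_NAME: "reviewbucket"
  script: aws s3 cp ./ s3://$S3_BUCKET_NAME/$CI_BUILD_REF_NAME/ --recursive
  environment: review
```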

Visual representation of this configuration:

The details of Review Apps implementation depend widely on your real technology stack and on your deployment process, which is out of the scope of this blog post.

It will not be as straightforward as it is with our static HTML website. For example, you would have to make these instances temporary, and booting up these instances with all the required software and services automatically on the fly is not a trivial task. However, it is doable, especially if you use Docker, or at least Chef or Ansible.

We’ll cover deployment with Docker in another blog post. To be fair, I feel a bit guilty for simplifying the deployment process to simple HTML file copying, and not adding some hardcore scenarios. If you need some right now, I recommend reading the article “Building an Elixir Release into a Docker image using GitLab CI”.

For now, let’s talk about one final thing.

Deploying to Different Platforms

In real life, we are not limited to S3 and GitLab Pages. We host, and therefore, deploy our apps and packages to various services.

Moreover, at some point, you could decide to move to a new platform and thus need to rewrite all your deployment scripts. You can use a gem called dpl to minimize the damage.
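A dpl invocation for the same S3 deployment might look like this (a sketch; the S3 provider flags follow dpl’s README):

```shell
gem install dpl
# dpl takes the credentials as options, so the CI variables can be reused
dpl --provider=s3 --bucket=$S3_BUCKET_NAME \
    --access-key-id=$AWS_ACCESS_KEY_ID \
    --secret-access-key=$AWS_SECRET_ACCESS_KEY
```

If you later switch platforms, you change the provider and its options rather than rewriting the whole deployment script.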

In the examples above we used awscli as a tool to deliver code to an example service (Amazon S3). However, no matter what tool and what destination system you use, the principle is the same: you run a command with some parameters and somehow pass a secret key for authentication purposes.

Introduction

Cloud hosting is a method of using online virtual servers that can be created, modified, and destroyed on demand. Cloud servers are allocated resources like CPU cores and memory by the physical servers that they’re hosted on and can be configured with a developer’s choice of operating system and accompanying software. Cloud hosting can be used for hosting websites, sending and storing emails, and distributing web-based applications and other services.

In this guide, we will go over some of the basic concepts involved in cloud hosting, including how virtualization works, the components in a virtual environment, and comparisons with other common hosting methods.

What is “the Cloud”?

“The Cloud” is a common term that refers to servers connected to the Internet that are available for public use, either through paid leasing or as part of a software or platform service. A cloud-based service can take many forms, including web hosting, file hosting and sharing, and software distribution. “The Cloud” can also be used to refer to cloud computing, which is the practice of using several servers linked together to share the workload of a task. Instead of running a complex process on a single powerful machine, cloud computing distributes the task across many smaller computers.

Other Hosting Methods

Cloud hosting is just one of many different types of hosting available to customers and developers today, though there are some key differences between them. Traditionally, sites and apps with low budgets and low traffic would use shared hosting, while more demanding workloads would be hosted on dedicated servers.

Shared hosting is the most common and most affordable way to get a small and simple site up and running. In this scenario, hundreds or thousands of sites share a common pool of server resources, like memory and CPU. Shared hosting tends to offer the most basic and inflexible feature and pricing structures, as access to the site’s underlying software is very limited due to the shared nature of the server.

Dedicated hosting is when a physical server machine is sold or leased to a single client. This is more flexible than shared hosting, as a developer has full control over the server’s hardware, operating system, and software configuration. Dedicated servers are common among more demanding applications, such as enterprise software and commercial services like social media, online games, and development platforms.

How Virtualization Works

Cloud hosting environments are broken down into two main parts: the virtual servers that apps and websites can be hosted on and the physical hosts that manage the virtual servers. This virtualization is what is behind the features and advantages of cloud hosting: the relationship between host and virtual server provides flexibility and scaling that are not available through other hosting methods.

Virtual Servers

The most common form of cloud hosting today is the use of a virtual private server, or VPS. A VPS is a virtual server that acts like a real computer with its own operating system. While virtual servers share resources that are allocated to them by the host, their software is well isolated, so operations on one VPS won’t affect the others.

Virtual servers are deployed and managed by the hypervisor of a physical host. Each virtual server has an operating system installed by the hypervisor and available to the user to add software on top of. For many practical purposes, a virtual server is identical in use to a dedicated physical server, though performance may be lower in some cases due to the virtual server sharing physical hardware resources with other servers on the same host.

Hosts

Resources are allocated to a virtual server by the physical server that it is hosted on. This host uses a software layer called a hypervisor to deploy, manage, and grant resources to the virtual servers that are under its control. The term “hypervisor” is often used to refer to the physical hosts that hypervisors (and their virtual servers) are installed on.

The host is in charge of allocating memory, CPU cores, and a network connection to a virtual server when one is launched. An ongoing duty of the hypervisor is to schedule processes between the virtual CPU cores and the physical ones, since multiple virtual servers may be utilizing the same physical cores. The method of choice for process scheduling is one of the key differences between different hypervisors.

Hypervisors

There are a few common hypervisor software packages available for cloud hosts today. These different virtualization methods have some key differences, but they all provide the tools that a host needs to deploy, maintain, move, and destroy virtual servers as needed.

KVM, short for “Kernel-Based Virtual Machine”, is a virtualization infrastructure that is built into the Linux kernel. When activated, this kernel module turns the Linux machine into a hypervisor, allowing it to begin hosting virtual servers. This method contrasts with how other hypervisors usually work, as KVM does not need to create or emulate kernel components that are used for virtual hosting.

Xen is one of the most common hypervisors in use today. Unlike KVM, Xen uses a microkernel, which provides the tools needed to support virtual servers without modifying the host’s kernel. Xen supports two distinct methods of virtualization: paravirtualization, which skips the need to emulate hardware but requires special modifications made to the virtual servers’ operating system, and hardware-assisted virtualization, which uses special hardware features to efficiently emulate a virtual server so that they can use unmodified operating systems.

ESXi is an enterprise-level hypervisor offered by VMware. ESXi is unique in that it doesn’t require the host to have an underlying operating system. This is referred to as a “type 1” hypervisor and is extremely efficient due to the lack of a “middleman” between the hardware and the virtual servers. With type 1 hypervisors like ESXi, no operating system needs to be loaded on the host because the hypervisor itself acts as the operating system.

Hyper-V is one of the most popular methods of virtualizing Windows servers and is available as a system service in Windows Server. This makes Hyper-V a common choice for developers working within a Windows software environment. Hyper-V is included in Windows Server 2008 and 2012 and is also available as a stand-alone server without an existing installation of Windows Server.

Why Cloud Hosting?

The features offered by virtualization lend themselves well to a cloud hosting environment. Virtual servers can be configured with a wide range of hardware resource allocations, and can often have resources added or removed as needs change over time. Some cloud hosts can move a virtual server from one hypervisor to another with little or no downtime or duplicate the server for redundancy in case of a node failure.

Customization

Developers often prefer to work in a VPS due to the control that they have over the virtual environment. Most virtual servers running Linux offer access to the root (administrator) account or sudo privileges by default, giving a developer the ability to install and modify whatever software they need.

This freedom of choice begins with the operating system. Most hypervisors are capable of hosting nearly any guest operating system, from open source software like Linux and BSD to proprietary systems like Windows. From there, developers can begin installing and configuring the building blocks needed for whatever they are working on. A cloud server’s configurations might involve a web server, database, email service, or an app that has been developed and is ready for distribution.

Scalability

Cloud servers are very flexible in their ability to scale. Scaling methods fall into two broad categories: horizontal scaling and vertical scaling. Most hosting methods can scale one way or the other, but cloud hosting is unique in its ability to scale both horizontally and vertically. This is due to the virtual environment that a cloud server is built on: since its resources are an allocated portion of a larger physical pool, it’s easy to adjust these resources or duplicate the virtual image to other hypervisors.

Horizontal scaling, often referred to as “scaling out”, is the process of adding more nodes to a clustered system. This might involve adding more web servers to better manage traffic, adding new servers to a region to reduce latency, or adding more database workers to increase data transfer speed. Many newer web utilities, like CoreOS, Docker, and Couchbase, are built around efficient horizontal scaling.

Vertical scaling, or “scaling up”, is when a single server is upgraded with additional resources. This might be an expansion of available memory, an allocation of more CPU cores, or some other upgrade that increases that server’s capacity. These upgrades usually pave the way for additional software instances, like database workers, to operate on that server. Before horizontal scaling became cost-effective, vertical scaling was the method of choice to respond to increasing demand.

With cloud hosting, developers can scale depending on their application’s needs — they can scale out by deploying additional VPS nodes, scale up by upgrading existing servers, or do both when server needs have dramatically increased.

Conclusion

By now, you should have a decent understanding of how cloud hosting works, including the relationship between hypervisors and the virtual servers that they are responsible for, as well as how cloud hosting compares to other common hosting methods. With this information in mind, you can choose the best hosting for your needs.

Almost all apps will need to store data of some form. Maybe you need to save user preferences, progress in a game, or offline data so your app can work without a network connection. Developers have a lot of options for managing data in iOS apps, from Core Data to cloud based storage, but one elegant and reliable local storage option is SQLite.

In this tutorial I will show you how to add SQLite support to your app. You can find the final source code on GitHub.

Getting Started

The SQLite library is written in C, and all queries happen as calls to C functions. This makes it challenging to use, as you have to be aware of pointers and data types etc. To help, you can make use of Objective-C or Swift wrappers to serve as an adapter layer.

A popular choice is FMDB, an Objective-C wrapper around SQLite. It's easy to use, but personally I prefer not to use hard-coded SQL (Structured Query Language) commands. For this tutorial, I will use SQLite.swift to create a basic contact list.

First, create a new single view project in Xcode (SQLite.swift requires Swift 2 and Xcode 7 or greater). I created a ViewController in Main.storyboard that looks like the below. Create your own similar layout, or download the storyboard files here.

The last function returns a specific UITableViewCell for each row. First get the cell using the identifier, then its child views using their tag. Make sure that the identifiers match your element names.

Here you take the values of the UITextFields, and create an object which is added to the contacts list. The id is set to 0, since you haven’t implemented the database yet. The function insertRowsAtIndexPaths() takes as arguments an array of indexes of the rows that will be affected, and the animation to perform with the change.

The <- operator assigns values to the corresponding columns as you would in a normal query. The run method will execute these queries and statements. The id of the row inserted is returned from the method.

If you want to undertake further debugging you can use a method instead. The prepare method returns a list of all the rows in the specified table. You loop through these rows and create an array of Contact objects with the column content as parameters. If this operation fails, an empty list is returned.
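Since the Swift listings themselves aren't reproduced here, the insert-and-query flow that SQLite.swift wraps can be illustrated with Python's built-in sqlite3 module; the contacts table and its columns are illustrative stand-ins for the tutorial's schema, not the tutorial's actual code:

```python
import sqlite3

# An in-memory database stands in for the app's .sqlite file.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE contacts (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT,
    phone TEXT)""")

# Insert a row and read back its generated id,
# mirroring the id that SQLite.swift's run(insert) returns.
cur = db.execute("INSERT INTO contacts (name, phone) VALUES (?, ?)",
                 ("Alice", "555-0100"))
new_id = cur.lastrowid

# Query all rows, mirroring the prepare() loop described above.
rows = db.execute("SELECT id, name, phone FROM contacts").fetchall()
print(new_id, rows)
```

The wrapper's value is that it turns these raw SQL strings into type-checked expressions, but the underlying calls are the same.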

Run the app and try to perform some actions. Below are two screenshots of how it should look. To update or delete a contact it must first be selected.

Any Queries?

SQLite is a good choice for working with local data, and is used by many apps and games. Wrappers like SQLite.swift make the implementation easier by avoiding the use of hard-coded SQL queries. If you need to store data in your app and don’t want to have to handle more complex options, then SQLite is worth considering.

Memcached or Redis? It’s a question that nearly always arises in any discussion about squeezing more performance out of a modern, database-driven Web application. When performance needs to be improved, caching is often the first step taken, and Memcached or Redis are typically the first places to turn.

These renowned cache engines share a number of similarities, but they also have important differences. Redis, the newer and more versatile of the two, is almost always the superior choice.

The similarities

Let’s start with the similarities. Both Memcached and Redis serve as in-memory, key-value data stores, although Redis is more accurately described as a data structure store. Both Memcached and Redis belong to the NoSQL family of data management solutions, and both are based on a key-value data model. They both keep all data in RAM, which of course makes them supremely useful as a caching layer. In terms of performance, the two data stores are also remarkably similar, exhibiting almost identical characteristics (and metrics) with respect to throughput and latency.

Both Memcached and Redis are mature and hugely popular open source projects. Memcached was originally developed by Brad Fitzpatrick in 2003 for the LiveJournal website. Since then, Memcached has been rewritten in C (the original implementation was in Perl) and put in the public domain, where it has become a cornerstone of modern Web applications. Current development of Memcached is focused on stability and optimizations rather than adding new features.

Redis was created by Salvatore Sanfilippo in 2009, and Sanfilippo remains the lead developer of the project today. Redis is sometimes described as “Memcached on steroids,” which is hardly surprising considering that parts of Redis were built in response to lessons learned from using Memcached. Redis has more features than Memcached and is, thus, more powerful and flexible.

Used by many companies and in countless mission-critical production environments, both Memcached and Redis are supported by client libraries in every conceivable programming language, and they’re included in a multitude of packages for developers. In fact, it’s a rare Web stack that does not include built-in support for either Memcached or Redis.

Why are Memcached and Redis so popular? Not only are they extremely effective, they’re also relatively simple. Getting started with either Memcached or Redis is considered easy work for a developer. It takes only a few minutes to set up and get them working with an application. Thus, a small investment of time and effort can have an immediate, dramatic impact on performance — usually by orders of magnitude. A simple solution with a huge benefit; that’s as close to magic as you can get.

When to use Memcached

Because Redis is newer and has more features than Memcached, Redis is almost always the better choice. However, Memcached could be preferable when caching relatively small and static data, such as HTML code fragments. Memcached’s internal memory management, while not as sophisticated as that of Redis, is more efficient in the simplest use cases because it consumes comparatively less memory resources for metadata. Strings (the only data type supported by Memcached) are ideal for storing data that’s only read, because strings require no further processing.

That said, Memcached’s memory management efficiency diminishes quickly when data size is dynamic, at which point Memcached’s memory can become fragmented. Also, large data sets often involve serialized data, which always requires more space to store. While Memcached is effectively limited to storing data in its serialized form, the data structures in Redis can store any aspect of the data natively, thus reducing serialization overhead.

The second scenario in which Memcached has an advantage over Redis is in scaling. Because Memcached is multithreaded, you can easily scale up by giving it more computational resources, but you will lose part or all of the cached data (depending on whether you use consistent hashing). Redis, which is mostly single-threaded, can scale horizontally via clustering without loss of data. Clustering is an effective scaling solution, but it is comparatively more complex to set up and operate.

When to use Redis

You’ll almost always want to use Redis because of its data structures. With Redis as a cache, you gain a lot of power (such as the ability to fine-tune cache contents and durability) and greater efficiency overall. Once you use the data structures, the efficiency boost becomes tremendous for specific application scenarios.

Redis’ superiority is evident in almost every aspect of cache management. Caches employ a mechanism called data eviction to make room for new data by deleting old data from memory. Memcached’s data eviction mechanism employs a Least Recently Used algorithm and somewhat arbitrarily evicts data that’s similar in size to the new data.

Redis, by contrast, allows for fine-grained control over eviction, letting you choose from six different eviction policies. Redis also employs more sophisticated approaches to memory management and eviction candidate selection. Redis supports both lazy eviction, where data is evicted only when more space is needed, and active eviction, where data is removed proactively. Memcached, on the other hand, provides lazy eviction only.
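Memcached's actual LRU implementation is slab-based and more involved, but the core least-recently-used policy both engines build on can be sketched in a few lines (a toy cache, not either engine's real code):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal least-recently-used cache: on overflow, the entry
    that was touched longest ago is evicted first."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()  # insertion order doubles as recency order

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)  # mark as recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")      # touch "a", so "b" becomes the eviction candidate
cache.put("c", 3)   # capacity exceeded: "b" is evicted
print(cache.get("b"), cache.get("a"))
```

Redis's eviction policies (such as allkeys-lru or volatile-ttl) generalize exactly this decision of which entry to drop.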

Redis gives you much greater flexibility regarding the objects you can cache. While Memcached limits key names to 250 bytes and works with plain strings only, Redis allows key names and values to be as large as 512MB each, and they are binary safe. Plus, Redis has five primary data structures to choose from, opening up a world of possibilities to the application developer through intelligent caching and manipulation of cached data.

Beyond caching

Using Redis data structures can simplify and optimize several tasks — not only while caching, but even when you want the data to be persistent and always available. For example, instead of storing objects as serialized strings, developers can use a Redis Hash to store an object’s fields and values, and manage them using a single key. Redis Hash saves developers the need to fetch the entire string, deserialize it, update a value, reserialize the object, and replace the entire string in the cache with its new value for every trivial update — that means lower resource consumption and increased performance.
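As a rough sketch of the saving described above, compare the two update paths below; an in-process dict stands in for the remote store, so the real network round-trips and HSET/HINCRBY commands are only mimicked:

```python
import json

# A plain dict stands in for the key-value store in this sketch.
store = {}

# Memcached-style: the whole object lives under one key as a serialized string.
store["user:1"] = json.dumps({"name": "Alice", "visits": 1, "theme": "dark"})

# Updating one field means fetch, deserialize, modify, reserialize, rewrite.
obj = json.loads(store["user:1"])
obj["visits"] += 1
store["user:1"] = json.dumps(obj)

# Redis-Hash-style: each field is individually addressable on the server.
store["user:2"] = {"name": "Alice", "visits": 1, "theme": "dark"}
store["user:2"]["visits"] += 1  # one field touched, nothing reserialized

print(json.loads(store["user:1"])["visits"], store["user:2"]["visits"])
```

Both paths end at the same state, but the hash-style update touches one field instead of shipping and re-parsing the whole object.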

Other data structures offered by Redis (such as lists, sets, sorted sets, hyperloglogs, bitmaps, and geospatial indexes) can be used to implement even more complex scenarios. Using sorted sets for time-series data ingestion and analysis is another example of how a Redis data structure offers enormously reduced complexity and lower bandwidth consumption.
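To make the sorted-set idea concrete, here is a toy score-ordered store sketched with Python's bisect module; zadd and zrangebyscore mimic the Redis commands of the same names, but this is an illustration of the semantics, not how Redis implements them (Redis uses a skip list internally):

```python
import bisect

# Entries kept ordered by score (here, a timestamp), like a Redis Sorted Set.
series = []  # list of (score, member) tuples, always sorted by score

def zadd(score, member):
    bisect.insort(series, (score, member))

def zrangebyscore(lo, hi):
    # Find the slice of entries whose score falls in [lo, hi].
    left = bisect.bisect_left(series, (lo,))
    right = bisect.bisect_right(series, (hi, chr(0x10FFFF)))
    return [member for _, member in series[left:right]]

zadd(1000, "login")
zadd(1005, "click")
zadd(1020, "logout")
print(zrangebyscore(1000, 1010))
```

Because entries stay ordered by score, a time-window query is a cheap range scan rather than a full filter pass, which is where the reduced complexity comes from.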

Another important advantage of Redis is that the data it stores isn’t opaque, so the server can manipulate it directly. A considerable share of the 180-plus commands available in Redis are devoted to data processing operations and embedding logic in the data store itself via server-side Lua scripting. These built-in commands and user scripts give you the flexibility of handling data processing tasks directly in Redis without having to ship data across the network to another system for processing.

Redis offers optional and tunable data persistence designed to bootstrap the cache after a planned shutdown or an unplanned failure. While we tend to regard the data in caches as volatile and transient, persisting data to disk can be quite valuable in caching scenarios. Having the cache’s data available for loading immediately after restart allows for much shorter cache warm-up and removes the load involved in repopulating and recalculating cache contents from the primary data store.

Data replication too

Redis can also replicate the data that it manages. Replication can be used for implementing a highly available cache setup that can withstand failures and provide uninterrupted service to the application. A cache failure falls only slightly short of application failure in terms of the impact on user experience and application performance, so having a proven solution that guarantees the cache’s contents and service availability is a major advantage in most cases.

Last but not least, in terms of operational visibility, Redis provides a slew of metrics and a wealth of introspective commands with which to monitor and track usage and abnormal behavior. Real-time statistics about every aspect of the database, the display of all commands being executed, the listing and managing of client connections — Redis has all that and more.

When developers realize the effectiveness of Redis’ persistence and in-memory replication capabilities, they often use it as a first-responder database, usually to analyze and process high-velocity data and provide responses to the user while a secondary (often slower) database maintains a historical record of what happened. When used in this manner, Redis can also be ideal for analytics use cases.

Redis for analytics

Three analytics scenarios come immediately to mind. In the first scenario, when using something like Apache Spark to iteratively process large data sets, you can use Redis as a serving layer for data previously calculated by Spark. In the second scenario, using Redis as your shared, in-memory, distributed data store can accelerate Spark processing speeds by a factor of 45 to 100. Finally, an all too common scenario is one in which reports and analytics need to be customizable by the user, but retrieving data from inherently batch data stores (like Hadoop or an RDBMS) takes too long. In this case, an in-memory data structure store such as Redis is the only practical way of getting submillisecond paging and response times.

When using extremely large operational data sets or analytics workloads, running everything in-memory might not be cost effective. To achieve submillisecond performance at lower cost, Redis Labs created a version of Redis that runs on a combination of RAM and flash, with the option to configure RAM-to-flash ratios. While this opens up several new avenues to accelerate workload processing, it also gives developers the option to simply run their “cache on flash.”

Open source software continues to provide some of the best technologies available today. When it comes to boosting application performance through caching, Redis and Memcached are the most established and production-proven candidates. However, given its richer functionality, more advanced design, many potential uses, and greater cost efficiency at scale, Redis should be your first choice in nearly every case.

You know that Linux is a hot data center server. You know it can save you money in licensing and maintenance costs. But that still leaves the question of what your best options are for Linux as a server operating system.

We have listed the top Linux Server distributions based on the following characteristics:

Ease of installation and use

Cost

Available commercial support

Data center reliability

Ubuntu

At the top of almost every Linux-related list, the Debian-based Ubuntu is in a class by itself. Canonical’s Ubuntu surpasses all other Linux server distributions — from its simple installation to its excellent hardware discovery to its world-class commercial support, Ubuntu sets a strong standard that is hard to match.

The latest release of Ubuntu, Ubuntu 16.04 LTS “Xenial Xerus,” debuted in April 2016 and ups the ante with OpenStack Mitaka support, the LXD pure-container hypervisor, and Snappy, an optimized packaging system developed specifically for working with newer trends and technologies such as containers, mobile and the Internet of Things (IoT).

The LTS in Ubuntu 16.04 LTS stands for Long Term Support. The LTS versions are released every two years and include five years of commercial support for the Ubuntu Server edition.

Red Hat Enterprise Linux

While Red Hat started out as the “little Linux company that could,” its Red Hat Enterprise Linux (RHEL) server operating system is now a major force in the quest for data center rackspace. The Linux darling of large companies throughout the world, Red Hat’s innovations and non-stop support, including ten years of support for major releases, will keep you coming back for more.

RHEL is based on the community-driven Fedora, which Red Hat sponsors. Fedora is updated more frequently than RHEL and serves as more of a bleeding-edge Linux distro in terms of features and technology, but it doesn’t offer the stability or the length and quality of commercial support that RHEL is renowned for.

In development since 2010, Red Hat Enterprise Linux 7 (RHEL 7) made its official debut in June 2014, and the major update offers scalability improvements for enterprises, including a new filesystem that can scale to 500 terabytes, as well as support for Docker container virtualization technology. The most recent release of RHEL, version 7.2, arrived in November 2015.

SUSE Linux Enterprise Server

The Micro Focus-owned (but independently operated) SUSE Linux Enterprise Server (SLES) is stable, easy to maintain and offers 24×7 rapid-response support for those who don’t have the time or patience for lengthy troubleshooting calls. And the SUSE consulting teams will have you meeting your SLAs and making your accountants happy to boot.

Similar to how Red Hat’s RHEL is based on the open-source Fedora distribution, SLES is based on the open-source openSUSE Linux distro, with SLES focusing on stability and support over leading-edge features and technologies.

The most recent major release, SUSE Linux Enterprise Server 12 (SLES 12), debuted in late October 2014 and introduced new features like a framework for Docker, full system rollback, live kernel patching enablement and software modules for “increasing data center uptime, improving operational efficiency and accelerating the adoption of open source innovation,” according to SUSE.

SLES 12 SP1 (Service Pack 1) followed the initial SLES 12 release in December 2015, and added support for Docker, Network Teaming, Shibboleth and JeOS images.

CentOS

If you operate a website through a web hosting company, there’s a very good chance your web server is powered by CentOS Linux. This low-cost clone of Red Hat Enterprise Linux isn’t strictly commercial, but since it’s based on RHEL, you can leverage commercial support for it.

Short for Community Enterprise Operating System, CentOS has largely operated as a community-driven project that used the RHEL code, removed all Red Hat’s trademarks, and made the Linux server OS available for free use and distribution.

In 2014 the focus shifted after Red Hat and CentOS announced they would collaborate going forward and that CentOS would serve to address the gap between the community-innovation-focused Fedora platform and the enterprise-grade, commercially deployed Red Hat Enterprise Linux platform.

CentOS will continue to deliver a community-oriented operating system with a mission of helping users develop and adopt open source technologies on a Linux server distribution that is more consistent and conservative than Fedora’s more innovative role. At the same time, CentOS will remain free, with support provided by the community-led CentOS project rather than through Red Hat. CentOS released CentOS 7.2 in December 2015, which is derived from Red Hat Enterprise Linux 7.2.

Debian

If you’re confused by Debian’s inclusion here, don’t be. Debian doesn’t have formal commercial support, but you can connect with Debian-savvy consultants around the world via their Consultants page. Debian originated in 1993 and has spawned more child distributions than any other parent Linux distribution, including Ubuntu, Linux Mint and Vyatta.

Debian remains a popular option for those who value stability over the latest features. The latest major stable version of Debian, Debian 8 “jessie,” was released in April 2015, and it will be supported for five years.

Debian 8 marks the switch from the old SysVinit init system to systemd, and includes the latest releases of the Linux kernel, Apache, LibreOffice, Perl, Python, Xen Hypervisor, GNU Compiler Collection and the GNOME and Xfce desktop environments.

The latest update for Debian 8, version 8.4, debuted on April 2nd, 2016.

Oracle Linux

If you didn’t know that Oracle produces its own Linux distribution, you’re not alone. Oracle Linux (formerly Oracle Enterprise Linux) is Red Hat Enterprise Linux fortified with Oracle’s own special Kool-Aid as well as various Oracle logos and art added in.

Oracle’s Linux competes directly with Red Hat’s Linux server distributions, and does so quite effectively since purchased support through Oracle is half the price of Red Hat’s equivalent model.

Optimized for Oracle’s database services, Oracle Linux is a heavy contender in the enterprise Linux market. If you run Oracle databases and want to run them on Linux, you know the drill: Call Oracle.

The latest release of Oracle Linux, version 7.2, arrived in November 2015 and is based on RHEL 7.2.

Mageia / Mandriva

Mageia is an open-source-based fork of Mandriva Linux that made its debut in 2011. The most recent release, Mageia 5, became available in June 2015, and Mageia 6 is expected to debut in late June 2016.

For U.S.-based executive or technical folks, Mageia and its predecessor Mandriva might be a bit foreign. The incredibly well-constructed Mandriva Linux distribution hails from France and enjoys extreme acceptance in Europe and South America. The Mandriva name and its construction derive from the Mandrake Linux and Connectiva Linux distributions.

Mageia maintains the strengths of Mandriva while continuing its development with new features and capabilities, as well as support from the community organization Mageia.Org. Mageia updates are typically released on a 9-month release cycle, with each release supported for two cycles (18 months).

As for Mandriva Linux, the Mandriva SA company continues its business Linux server projects, which are now based on Mageia code.

Two cases are covered here: testing with the Docker executor and testing with the Shell executor.

Test PHP projects using the Docker executor

While it is possible to test PHP apps on any system, this would require manual configuration from the developer. To overcome this we will be using the official PHP Docker image, which can be found on Docker Hub.

This will allow us to test PHP projects against different versions of PHP. However, not everything is plug ’n’ play; you still need to configure some things manually.

As with every build, you need to create a valid .gitlab-ci.yml describing the build environment.

Let’s first specify the PHP image that will be used for the build process (you can read more about what an image means in the Runner’s lingo in the Using Docker images documentation).

Start by adding the image to your .gitlab-ci.yml:

image: php:5.6

The official images are great, but they lack a few useful tools for testing, so we need to prepare the build environment first. A way to overcome this is to create a script which installs all prerequisites before the actual testing is done.

Let’s create a ci/docker_install.sh file in the root directory of our repository with the following content:

#!/bin/bash

# We need to install dependencies only for Docker
[[ ! -e /.dockerenv ]] && [[ ! -e /.dockerinit ]] && exit 0

set -xe

# Install git (the php image doesn't have it) which is required by composer
apt-get update -yqq
apt-get install git -yqq

# Install phpunit, the tool that we will use for testing
curl --location --output /usr/local/bin/phpunit https://phar.phpunit.de/phpunit.phar
chmod +x /usr/local/bin/phpunit

# Install mysql driver
# Here you can install any other extension that you need
docker-php-ext-install pdo_mysql

You might wonder what docker-php-ext-install is. In short, it is a script provided by the official PHP Docker image that you can use to easily install extensions. For more information read the documentation at https://hub.docker.com/r/_/php/.

Now that we created the script that contains all prerequisites for our build environment, let’s add it in .gitlab-ci.yml:

...

before_script:
  - bash ci/docker_install.sh > /dev/null

...

Last step, run the actual tests using phpunit:

...

test:app:
  script:
    - phpunit --configuration phpunit_myapp.xml

...

Finally, commit your files and push them to GitLab to see your build succeeding (or failing).
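Putting the pieces above together, the complete .gitlab-ci.yml for this example would look roughly like this:

```yaml
image: php:5.6

before_script:
  # Install the build prerequisites inside the container
  - bash ci/docker_install.sh > /dev/null

test:app:
  script:
    - phpunit --configuration phpunit_myapp.xml
```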

Using phpenv also allows you to easily configure the PHP environment with:

phpenv config-add my_config.ini

Important note: It seems phpenv/phpenv is abandoned. There is a fork at madumlao/phpenv that tries to bring the project back to life. CHH/phpenv also seems like a good alternative. Picking either of the mentioned tools will work with the basic phpenv commands. Guiding you to choose the right phpenv is beyond the scope of this tutorial.

Install custom extensions

Since this is a pretty bare installation of the PHP environment, you may need some extensions that are not currently present on the build machine.

To install additional extensions simply execute:

pecl install <extension>

It’s not advised to add this to .gitlab-ci.yml. You should execute this command once, only to set up the build environment.

Extend your tests

Using atoum

Instead of PHPUnit, you can use any other tool to run unit tests, such as atoum.

Access private packages / dependencies

If your test suite needs to access a private repository, you need to configure the SSH keys in order to be able to clone it.

Use databases or other services

Most of the time you will need a running database in order for your tests to run. If you are using the Docker executor you can leverage Docker’s ability to link to other containers. In GitLab Runner lingo, this can be achieved by defining a service.
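For example, a MySQL container could be declared as a service in .gitlab-ci.yml; the database name and password variables below are illustrative values read by the official mysql image, and the service becomes reachable from the build container under the hostname mysql:

```yaml
services:
  - mysql:5.7

variables:
  # Read by the official mysql image at startup; values are illustrative
  MYSQL_DATABASE: my_app_test
  MYSQL_ROOT_PASSWORD: secret

test:app:
  script:
    # Tests can now connect to the database at host "mysql"
    - phpunit --configuration phpunit_myapp.xml
```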