For the past eight years, Austria has been struggling with the thorny issue of pirate site blocking. Local ISPs have put up quite a fight but site blocking is now a reality, albeit with a certain amount of confusion.

After a dizzying route through the legal system, last November the Supreme Court finally ruled that The Pirate Bay and other “structurally-infringing” sites including 1337x.to and isohunt.to can be blocked, if rightsholders have exhausted all other options.

The Court based its decision on the now-familiar BREIN v Filmspeler and BREIN v Ziggo and XS4All cases that received European Court of Justice rulings last year. However, there is now an additional complication, this time on the net neutrality front.

After being passed in October 2015 and coming into force in April 2016, the Telecom Single Market (TSM) Regulation established the principle of non-discriminatory traffic management in the EU. The regulation still allows for the blocking of copyright-infringing websites but only where supported by a clear administrative or judicial decision. This is where T-Mobile sees a problem.

In addition to blocking sites named specifically by the court, copyright holders also expect the ISP to block related platforms, such as clones and mirrors, that aren’t specified in the same manner.

So, last week, after blocking several obscure Pirate Bay clones such as proxydl.cf, the ISP reported itself to the Austrian Regulatory Authority for Broadcasting and Telecommunications (RTR) for a potential net neutrality breach.

“It sounds paradoxical, but this should finally bring legal certainty in a long-standing dispute over pirate sites. T-Mobile Austria has filed with regulatory authority RTR a kind of self-report, after blocking several sites on the basis of a warning by rights holders,” T-Mobile said in a statement.

“The background to the communication to the RTR, through which T-Mobile intends to obtain an assessment by the regulator, is a very unsatisfactory legal situation in which operators have no opportunity to behave in conformity with the law.

“The service provider is forced upon notification by the copyright owner to even judge about possible copyright infringements. At the same time, the provider is violating the principle of net neutrality by setting up a ban.”

T-Mobile says the problem is complicated by rightsholders who, after obtaining a blocking order forcing named ISPs to block named pirate sites (as required under EU law), send similar demands to other ISPs that were not party to court proceedings. The rightsholders also send blocking demands when blocked sites disappear and reappear under a new name, despite those new names not being part of the original order.

According to industry body Internet Service Providers Austria (ISPA), there is a real need for clarification. It’s hoped that T-Mobile reporting itself for a potential net neutrality breach will have the desired effect.

“For more than two years, we have been trying to find a solution with the involved interest groups and the responsible ministry, which on the one hand protects the rights of the artists and on the other hand does not force the providers into the role of a judge,” complains Maximilian Schubert, Secretary General of the ISPA.

“The willingness of the rights holders to compromise had remained within manageable limits. Now they are massively increasing the pressure and demanding costly measures, which the service providers see as punishment for them providing legal security for their customers for many years.”

ISPA hopes that the telecoms regulator will now help to clear up this uncertainty.

“We now hope that the regulator will give a clear answer here. Because from our point of view, the assessment of legality cannot and should not be outsourced to companies,” Schubert concludes.

In 2012, file-hosting site Megaupload was shut down by the United States government and founder Kim Dotcom and his associates were arrested in New Zealand.

Ever since, the US government has sought to extradite Dotcom on several counts including copyright infringement, racketeering, and money laundering. Dotcom has fought them every single step of the way.

One of the key areas of conflict has been the validity of the search warrants used to raid his Coatesville home on January 20, 2012. The fight has been meticulous and lengthy but in 2014, following appeals through the lower courts, the Supreme Court finally dismissed Dotcom’s claims that the search warrants were invalid.

Dotcom subsequently appealed the High Court decision to the Court of Appeal, a hearing that will go ahead in February 2018. Last summer, the Megaupload founder also “attacked the underpinnings of the extradition process” by filing an eight-point statement of claim for judicial review. This morning the High Court handed down its decision, and it looks like bad news for Dotcom.

The causes of action presented by the Megaupload founder were varied but began by targeting the validity of the arrest warrants used in January 2012 and by extension every subsequent process, including the extradition effort itself.

“Accordingly, the relief sought includes orders that the extradition proceeding be quashed or set aside and that Mr Dotcom be discharged,” the ruling reads.

However, the Court describes this argument as an abuse of process, noting that the Supreme Court has already upheld the validity of the search warrants and a High Court ruling confirmed the District Court’s finding that Dotcom is eligible for extradition, a process that will soon head to the Court of Appeal.

But Dotcom’s arguments continued, with attacks on the validity of search warrants and a request to quash them and return all property seized under their authority. Another point asserted that a US request to seize Dotcom’s assets in New Zealand was invalid because no extraditable offense had been committed.

Unfortunately for Dotcom, none of his detailed arguments gained traction with the High Court. In his decision, Justice Timothy Brewer sides with the US government, which previously described the efforts as “collateral attacks on previous decisions of the Courts and an attempt to pre-empt Mr Dotcom’s appeal.”

The Judge eventually rejected seven out of the eight causes of action in a 22-page ruling (pdf) published this morning.

“I have granted the USA’s application to strike out causes of action 1 to 7 of the statement of claim for judicial review dated 21 July 2017. The proceeding is now ‘live’ only in relation to the eighth cause of action,” Justice Brewer writes.

“I direct that the proceeding be listed for mention in relation to the eighth cause of action in the duty list at 10:00 am on 7 February 2018.”

The eighth point, which wasn’t challenged by the US, concerns the “decision by the Deputy Solicitor-General in June 2017 to direct that clones be made of the electronic devices seized from Mr Dotcom’s homes and that they be sent to the USA.”

A few minutes ago, Dotcom took to Twitter with an apparently upbeat reference to the ruling.

Last November, the cybercrime unit of the French military police shut down the country’s largest pirate site, Zone-Telechargement (Download Zone). This was a huge problem for the millions of people who visited the site on a daily basis.

Founded in 2011, Zone-Telechargement saw its popularity soar after the closure of Megaupload, which was also hugely popular in France until its shutdown in early 2012. Zone-Telechargement has been dead since the raid, despite suggestions it might somehow return to life.

Interestingly, however, a site claiming to be a reincarnation of the original is now trying to scoop up traffic, with promises that the excitement can be found at a new URL.

“Welcome to the new Zone-Telechargement! This is the new address of the indexing site to find movies and series,” a notice on the site reads.

“We make every effort to ensure that you can watch your movies and series in the best conditions and in complete safety. Therefore, we invite you as a Zone-Telechargement user to help us in our big mission! Share our site, talk about it!”

This cloned pirate site is not what it seems

During the past couple of days, people have certainly been talking about it, but not for the usual reasons. As reported by NextInpact, the site already has 100,000 links on Google after being launched sometime in August.

But this is no ordinary pirate site. In fact, it’s not a pirate site at all. While it looks exactly like its pirate namesake, the site links only to legal content on platforms such as Amazon, iTunes, and other official sources.

NextInpact reports that the site is hosted in France and uses film posters and metadata hosted by the National Film Center, which grants official vendors access to a database of supporting content to help them sell their products online.

So, could this be an innovative and unconventional service set up by elements of the film industry to suck in pirates, perhaps?

TF decided to look into the possibility by pulling information from WHOIS, DNS and MX records, hoping to find a trace of who’s behind the operation. None of the searches yielded much information of direct value but they did turn up something else.

Zone-Telechargement.al, it seems, is not on its own. Hosted on the same server at OVH in France is Voirfilms.al which clones VoirFilms.org, a pirate site that was ordered to be blocked by the Paris District Court earlier this year.

Two peas in a pod on the same server

Just like Zone-Telechargement.al, Voirfilms.al only links to legal content. However, when one searches for movies, at least the first two sets of links to content contain affiliate codes for Amazon and a local service, meaning the site’s operators get a kickback from any sale.

Given they use the same host server, mail server, and referral codes (tag=blue0d7-21 for Amazon), we considered it likely that the same people are behind both domains, passing them off as pirate sites in an effort to generate revenue.

Then, on Friday afternoon, NextInpact editor Marc Rees contacted us with a really interesting update. After further research, Rees had concluded that anti-piracy outfit Blue Efficience was probably behind the scheme. Sure enough, after contacting founder and CEO Thierry Chevillard, the company confirmed the project.

“We always had the idea to promote the legal offer. Anti-piracy protection is good, but it is insufficient without this component,” Chevillard told Rees.

Chevillard said that since video-on-demand platforms have difficulties in getting themselves noticed over pirate sites, his company took the decision to mimic the pirate strategy.

“[T]he pirate sites are extremely talented at putting themselves ahead in search engines where they beat the legal offers,” he said, adding that using similar weapons was the solution.

Chevillard told NextInpact that his company initially published links to content without the affiliate kickback but later took the for-profit route in order to “partially offset the costs, even if we are far from covering the costs of developing and operating the site.”

Of course, there’s a certain irony in an anti-piracy outfit actively pirating a pair of pirate sites, particularly since it clearly pirated the pirate sites’ logos and graphics, in order to pass the clones off as the real thing. However, Chevillard sees them as fair game and says his company will take action in the unlikely event the pirates take legal action.

The big question, of course, is whether the clone sites are having the desired effect of encouraging legal purchases. According to early data from Zone-Telechargement.al, around five purchases are made out of every 1000 clicks on content listed by the site.

While Blue Efficience’s cover has been well and truly blown, the company is undeterred and says it will expand its pirate site cloning business. If the strategy reaches any scale, that could be a whole new level of spam for would-be pirates to wade through. Nevertheless, there is a comedy ending to this story.

It appears that since the fake sites are so convincing, rival anti-piracy outfits have been asking Google to take down pages (1,2) from its indexes. Most ‘impressive’ are the efforts from takedown outfit Rivendel, which has filed dozens of complaints against these ‘pirate’ sites. Ouch.

In my career I’ve frequently spent time waiting on some representative sample of data to use in development, experiments, or analytics. If I had a 2TB database it could take hours just waiting for a copy of the data to be ready before I could perform my tasks. Even within RDS MySQL, I would still have to wait several hours for a snapshot copy to complete before I was able to test a schema migration or perform some analytics. Aurora solves this problem in a very interesting way.

The distributed storage engine for Aurora allows us to do things which are normally not feasible or cost-effective with a traditional database engine. By creating pointers to individual pages of data the storage engine enables fast database cloning. Then, when you make changes to the data in the source or the clone, a copy-on-write protocol creates a new copy of that page and updates the pointers. This means my 2TB snapshot restore job that used to take an hour is now ready in about 5 minutes – and most of that time is spent provisioning a new RDS instance.

The time it takes to create the clone is independent of the size of the database since we’re pointing at the same storage. It also makes cloning a very cost-effective operation since I only pay storage costs for the changed pages instead of an entire copy. The database clone is still a regular Aurora Database Cluster with all the same durability guarantees.
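If you would rather script it than click through the console, a clone can also be created with the AWS SDK. The snippet below is a minimal boto3 sketch rather than the exact workflow from this post; the cluster identifiers and instance class are placeholder values.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Clone an existing Aurora cluster using the copy-on-write restore type;
# only pages that are subsequently changed will consume extra storage.
rds.restore_db_cluster_to_point_in_time(
    DBClusterIdentifier="my-aurora-clone",         # placeholder clone name
    SourceDBClusterIdentifier="my-aurora-source",  # placeholder source cluster
    RestoreType="copy-on-write",
    UseLatestRestorableTime=True,
)

# The clone is a regular Aurora cluster, so it still needs at least one instance.
rds.create_db_instance(
    DBInstanceIdentifier="my-aurora-clone-instance",
    DBClusterIdentifier="my-aurora-clone",
    Engine="aurora",
    DBInstanceClass="db.r3.large",
)
```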

It took about 5 minutes and 30 seconds for my clone to become available and I started making some large schema changes and saw no performance impact. The schema changes themselves completed faster than they would have on traditional MySQL due to improvements the Aurora team made to enable faster DDL operations. I could subsequently create a clone-of-a-clone or even a clone-of-a-clone-of-a-clone (and so on) if I wanted to have another team member perform some tests on my schema changes while I continued to make changes of my own. It’s important to note here that clones are first class databases from the perspective of RDS. I still have all of the features that every other Aurora database supports: snapshots, backups, monitoring and more.

I hope this feature will allow you and your teams to save a lot of time and money on experimenting and developing applications based on Amazon Aurora. You can read more about this feature in the Amazon Aurora User Guide and I strongly suggest following the AWS Database Blog. Anurag Gupta’s posts on quorums and Amazon Aurora storage are particularly interesting.

Have follow-up questions or feedback? Ping us at [email protected], or leave a comment here. We’d love to get your thoughts and suggestions.

This is a guest post by Jeremy Winters and Ritu Mishra, Solution Architects at Full 360. In their own words, “Full 360 is a cloud-first, cloud-native integrator and true believer in the cloud. Since our inception in 2007, our focus has been on helping customers with their journey into the cloud. Our practice areas – Big Data and Warehousing, Application Modernization, and Cloud Ops/Strategy – represent deep, but focused expertise.”

AWS Glue is a fully managed ETL (extract, transform, and load) service that makes it simple and cost-effective to categorize your data, clean it, enrich it, and move it reliably between various data stores. As a company who has been building data warehouse solutions in the cloud for 10 years, we at Full 360 were interested to see how we can leverage AWS Glue in customer solutions. This post details our experience and lessons learned from using AWS Glue for an Amazon Redshift data integration use case.

UI-based ETL Tools

We have been anticipating the release of AWS Glue since it was announced at re:Invent 2016. Many of our customers are looking for easy-to-use, UI-based tooling to manage their data transformation pipelines. However, in our experience, the complexity of any production pipeline tends to be difficult to unwind, regardless of the technology used to create it. At Full 360, we build cloud-native, script-based applications deployed in containers to handle data integration. We think script-based transformation provides a balance of robustness and flexibility necessary to handle any data problem that comes our way.

AWS Glue caters both to developers who prefer writing scripts and those who want UI-based interactions. It is possible to initially create jobs using the UI, by selecting data source and target. Under the hood, AWS Glue auto-generates the Python code for you, which can be edited if needed, though this isn’t necessary for the majority of use cases.

Of course, you don’t have to rely on the UI at all. You can simply write your own Python scripts, store them in Amazon S3 with any dependent libraries, and import them into AWS Glue. AWS Glue also supports IDE integration with tools such as PyCharm, and interestingly enough, Zeppelin notebooks! These integrations are targeted toward developers who prefer writing Python themselves and want a cleaner dev/test cycle.
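For orientation, here is a rough sketch of the boilerplate that an AWS Glue Python (PySpark) job script typically starts from; the UI-generated scripts follow the same general shape. This is an illustrative skeleton rather than code from a specific AWS example.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Resolve the job name passed in by AWS Glue and set up the Spark/Glue contexts.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# ... read, transform, and write data here ...

job.commit()
```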

Developers who are already in the business of scripting ETL will be excited by the ability to easily deploy Python scripts with AWS Glue, using existing workflows for source control and CI/CD, and have them deployed and executed in a fully managed manner by AWS. The UI experience of AWS Glue works well, but it is good to know that the tool accommodates those who prefer traditional coding. You can also dig into complex edge cases for data handling where the UI doesn’t cut it.

Serverless!

When AWS Lambda was released, we were excited about its potential to host our ETL processes, but with Lambda you are limited to a five-minute maximum for function execution. We resorted to running Docker containers on Amazon ECS to orchestrate many of our customers’ long-running tasks. With this approach, we are still required to manage the underlying infrastructure.

After a closer look at AWS Glue, we realized that it is a fully serverless PySpark runtime, accompanied by an Apache Hive metastore-compatible catalog-as-a-service. This means that you are not just running a script on a single core; instead you have the full power of a multi-worker Spark environment available. If you’re not familiar with the Spark framework, it introduces a new paradigm that allows for the processing of distributed, in-memory datasets. Spark has many uses, from data transformation to machine learning.

In AWS Glue, you use PySpark dataframes to transform data before loading it into your database. Dataframes manage data in a way similar to relational database tables, so the methods are likely to be familiar to most SQL users. Additionally, you can use SQL in your PySpark code to manipulate the data.
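As a small, hedged illustration (the S3 path and column names below are invented for the example), dataframe methods and SQL can be mixed freely inside the same PySpark script:

```python
# Assumes a SparkSession named `spark`, as set up in the Glue boilerplate above.
# Load some hypothetical JSON order data from S3 into a dataframe.
orders = spark.read.json("s3://my-bucket/raw/orders/")

# Dataframe-style transformation: filter rows and project columns.
recent = orders.filter(orders.order_date >= "2017-01-01") \
               .select("order_id", "customer_id", "amount")

# The same data can also be queried with plain SQL.
recent.createOrReplaceTempView("recent_orders")
totals = spark.sql(
    "SELECT customer_id, SUM(amount) AS total_spend "
    "FROM recent_orders GROUP BY customer_id"
)
totals.show()
```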

AWS Glue also simplifies the management of the runtime environment by providing you with a DPU setting, which allows you to dial up or down the amount of compute resources used to run your job. One DPU is equivalent to 4 vCPUs and 16 GB of memory.

Common use case

We expect that many customers will leverage AWS Glue to load one or many files from S3 into Amazon Redshift. To accelerate this process, you can use the crawler, an AWS Glue utility accessible from the console, to discover the schema of your data and store it in the AWS Glue Data Catalog, whether your data sits in a file or a database. We were able to discover the schemas of our source file and target table, then have AWS Glue construct and execute the ETL job. It worked! We successfully loaded the beer drinkers’ dataset, JSON files used in our advanced tuning class, into Amazon Redshift!
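A sketch of what that kind of load can look like inside a Glue job is below. The catalog database, table, connection name, and temp directory are placeholders for whatever your crawler and environment actually produce, not the exact names from our project.

```python
# Assumes the `glue_context` object from the job boilerplate shown earlier.
# Read the JSON files via the table the crawler registered in the Data Catalog.
source = glue_context.create_dynamic_frame.from_catalog(
    database="beer_drinkers_db",      # hypothetical catalog database
    table_name="beer_drinkers_json",  # hypothetical catalog table
)

# Rename and retype fields as needed before loading.
mapped = source.apply_mapping([
    ("drinker_id", "string", "drinker_id", "int"),
    ("favorite_beer", "string", "favorite_beer", "string"),
])

# Write the result into Amazon Redshift through a catalog connection,
# staging the data in S3 as Redshift loads require.
glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=mapped,
    catalog_connection="redshift-connection",   # hypothetical connection name
    connection_options={"dbtable": "beer_drinkers", "database": "analytics"},
    redshift_tmp_dir="s3://my-bucket/glue-temp/",
)
```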

Our use case

For our use case, we integrated AWS Glue and Amazon Redshift Spectrum with an open-source project that we initiated called SneaQL. SneaQL is an open source, containerized framework for sneaking interactivity into static SQL. SneaQL provides an extension to ANSI SQL with command tags to provide functionality such as loops, variables, conditional execution, and error handling to your scripts. It allows you to manage your scripts in an AWS CodeCommit Git repo, which then gets deployed and executed in a container.

We use SneaQL for complex data integrations, usually with an ELT model, where the data is loaded into the database, then transformed into fact tables using parameterized SQL. SneaQL enables advanced use cases like partial upsert aggregation of data, where multiple data sources can merge into the same fact table.

We think AWS Glue, Redshift Spectrum, and SneaQL offer a compelling way to build a data lake in S3, with all of your metadata accessible through a variety of tools such as Hive, Presto, Spark, and Redshift Spectrum. You can then build your aggregation tables in Amazon Redshift to drive your dashboards or other high-performance analytics.

In the video below, you can see a demonstration of using AWS Glue to convert JSON documents into Parquet for partitioned storage in an S3 data lake. We then access the data in S3 from Amazon Redshift by way of Redshift Spectrum. Near the end of the AWS Glue job, we call boto3 to trigger an Amazon ECS SneaQL task that performs an upsert of the data into our fact table. All the sample artifacts needed for this demonstration are available in the Full360/Sneaql GitHub repository.

In this scenario, AWS Glue manipulates data outside of a data warehouse and loads it to Amazon Redshift, and SneaQL manipulates data inside Amazon Redshift.

We think that they are a good complement to each other when doing complex integrations.

At the end of the AWS Glue script, the AWS SDK for Python (Boto3) is used to trigger the Amazon ECS task that runs SneaQL; a rough sketch of this call follows the steps below.

The SneaQL container pulls the secrets file from S3, then decrypts it with biscuit or AWS KMS.

SneaQL clones the appropriate branch from an AWS CodeCommit Git repo.

Data in Amazon Redshift is transformed by SneaQL statements stored in the repo.
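To make the first step concrete, here is a rough boto3 sketch of the hand-off from the Glue job to ECS. The cluster name, task definition, and environment override are hypothetical placeholders, not the values used in the sample repository.

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Kick off the containerized SneaQL task once the Glue transformations finish.
response = ecs.run_task(
    cluster="sneaql-cluster",           # placeholder cluster name
    taskDefinition="sneaql-upsert:1",   # placeholder task definition
    count=1,
    overrides={
        "containerOverrides": [
            {
                "name": "sneaql",
                # Tell the container which branch of SQL steps to execute.
                "environment": [{"name": "SNEAQL_BRANCH", "value": "master"}],
            }
        ]
    },
)
print(response["tasks"][0]["taskArn"])
```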

AWS Glue also supports conditional triggers that would allow us to split the jobs and make one dependent upon the other.

Final thoughts

Although, as a general rule, we believe in managing schema definitions in source control, we definitely think that the crawler feature has some attractive perks. Besides being handy for ad-hoc jobs, it is also partition-aware. This means that you can perform data movements without worrying about which data has been processed already, which is one less thing for you to manage. We’re interested to see the design patterns that emerge around the use of the crawler.

The freedom of PySpark also allows us to implement advanced use cases. While the boto3 library is available to all AWS Glue jobs, it is also possible to include any libraries and additional JAR files along with your Python script.

We think many customers will find AWS Glue valuable. Those who prefer to code their data processing pipeline will be quick to realize how powerful it is, especially for solving complex data cleansing problems. The learning curve for PySpark is real, so we suggest looking into dataframes as a good entry point. In the long run, the fact that AWS Glue is serverless makes the effort more than worth it.

An AMI provides the information required to launch an instance, which is a virtual server in the cloud. You can use one AMI to launch as many instances as you need. It is a security best practice to customize and harden your base AMI with required operating system updates and, if you are using AWS native services for continuous security monitoring and operations, to bake in agents such as those for Amazon EC2 Systems Manager (SSM), Amazon Inspector, CodeDeploy, and CloudWatch Logs. A customized and hardened AMI is often referred to as a “golden AMI.” The use of golden AMIs to create EC2 instances in your AWS environment allows for fast and stable application deployment and scaling, secure application stack upgrades, and versioning.

This sample will create a pipeline in AWS CodePipeline with the building blocks to support blue/green deployments of infrastructure and application code. The sample includes a custom Lambda step in the pipeline that executes Systems Manager Automation to build a golden AMI and update the Auto Scaling group with the golden AMI ID for every rollout of new application code. This guarantees that every new application deployment runs on a fully patched and customized AMI, automating hardened AMI rollout in a continuous integration and deployment model.
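At its core, that custom Lambda step boils down to starting a Systems Manager Automation execution. The sketch below is an assumption about how such a function could look; the document name and parameters are placeholders, not the ones shipped with the sample.

```python
import boto3

ssm = boto3.client("ssm")

def lambda_handler(event, context):
    # Start the Automation document that patches and customizes the source AMI
    # to produce the golden AMI; document name and parameters are hypothetical.
    execution = ssm.start_automation_execution(
        DocumentName="GoldenAMIAutomation",
        Parameters={
            "SourceAmiId": [event["sourceAmiId"]],
            "InstanceType": ["t2.medium"],
        },
    )
    return {"automationExecutionId": execution["AutomationExecutionId"]}
```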

We will build and run this sample in three parts.

Part 1: Setting up the AWS developer tools and deploying a base web application

Part 1 of the AWS CloudFormation template creates the initial Java-based web application environment in a VPC. It also creates all the required components of Systems Manager Automation, CodeCommit, CodeBuild, and CodeDeploy to support the blue/green deployments of the infrastructure and application resulting from ongoing code releases. Among the resources Part 1 creates are:

A CodeDeploy application with a deployment group configured with the Automatically copy Auto Scaling group setting.

The following Lambda functions:

A function to get the Amazon-provided source AMI ID based on region and architecture (a rough sketch of such a function appears after this list).

A function to update the Systems Manager parameter with the golden AMI ID.

A function to update the CodeDeploy deployment group with required blue/green configurations. (Currently AWS CloudFormation does not support creating a deployment group with blue/green deployment configurations.)
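For instance, looking up the Amazon-provided source AMI usually reduces to an EC2 describe_images call filtered by owner and name pattern. The sketch below shows one plausible shape for that function; it is an assumption for illustration, not the sample’s actual code.

```python
import boto3

def lambda_handler(event, context):
    # Find the newest Amazon-owned Amazon Linux HVM AMI in the requested region.
    ec2 = boto3.client("ec2", region_name=event.get("region", "us-east-1"))
    images = ec2.describe_images(
        Owners=["amazon"],
        Filters=[
            {"Name": "name", "Values": ["amzn-ami-hvm-*-x86_64-gp2"]},
            {"Name": "state", "Values": ["available"]},
        ],
    )["Images"]
    latest = max(images, key=lambda image: image["CreationDate"])
    return {"sourceAmiId": latest["ImageId"]}
```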

After Part 1 of the AWS CloudFormation stack creation is complete, go to the Outputs tab and click the Elastic Load Balancing link. You will see the following home page for the base web application:

Make sure you have all the outputs from the Part 1 stack handy. You need to supply them as parameters in Part 3 of the stack.

Part 2: Setting up your CodeCommit repository

In this part, you will commit and push your sample application code into the CodeCommit repository created in Part 1. To access the initial Git commands for cloning the empty repository to your local machine, go to the AWS CodeCommit console and click Connect. Make sure you have the IAM permissions required to access AWS CodeCommit from the command line interface (CLI).

After you’ve cloned the repository locally, download the sample application files from the part2 folder of the Git repository and place the files directly into your local repository. Do not include the aws-codedeploy-sample-tomcat folder. Go to the local directory and type the following commands to commit and push the files to the CodeCommit repository:

After all the files are pushed successfully, the repository should look like this:

Part 3: Setting up CodePipeline to enable blue/green deployments

Part 3 of the AWS CloudFormation template creates the pipeline in AWS CodePipeline and all the required components.

a) Source: The pipeline is triggered by any change to the CodeCommit repository.

b) BuildGoldenAMI: This Lambda step executes the Systems Manager Automation document to build the golden AMI. After the golden AMI is successfully created, a new launch configuration with the new AMI details is attached to the Auto Scaling group of the application deployment group. You can watch the progress of the automation in the EC2 console, under Systems Manager > Automations.

c) Build: This step uses the application build spec file to build the application build artifact. Here are the CodeBuild execution steps and their status:

d) Deploy: This step clones the Auto Scaling group, launches the new instances with the new AMI, deploys the application changes, reroutes the traffic from the elastic load balancer to the new instances and terminates the old Auto Scaling group. You can see the execution steps and their status in the CodeDeploy console.

After the CodePipeline execution is complete, you can access the application by clicking the Elastic Load Balancing link. You can find it in the output of Part 1 of the AWS CloudFormation template. Any subsequent commit to the application code in the CodeCommit repository triggers the pipeline and deploys the infrastructure and code with an updated AMI.

If you have feedback about this post, add it to the Comments section below. If you have questions about implementing the example used in this post, open a thread on the Developer Tools forum.

About the author

Ramesh Adabala is a Solutions Architect on the Southeast Enterprise Solution Architecture team at Amazon Web Services.

After Tijuana Rick’s father-in-law came by a working 1969 Wurlitzer 3100 jukebox earlier this year, he and Tijuana Rick quickly realised they lacked the original 45s to play on it. When they introduced a Raspberry Pi 3 into the mix, this was no longer an issue.

Tijuana Rick

Yes, I shall be referring to Rick as Tijuana Rick throughout this blog post. Be honest, wouldn’t you if you were writing about someone whose moniker is Tijuana Rick?

Wurlitzer

The Wurlitzer jukebox has to be one of the classic icons of Americana. It evokes images of leather-booth-lined diners filled with rock ‘n’ roll music and teddy-haired bad boys eyeing Cherry Cola-sipping Nancys and Sandys across the checkered tile floor.

image courtesy of Ariadna Bach

With its brightly lit exterior and visible record-changing mechanism, the Wurlitzer is more than just your average pub jukebox. I should know: I have an average pub jukebox in my house, and although there’s some wonderfully nostalgic joy in pressing its buttons to play my favourite track, it’s not a Wurlitzer.

Americana – exactly what it says on the tin jukebox

The Wurlitzer company was founded in 1853 by a German immigrant called – you guessed it – Rudolf Wurlitzer, and at first it imported stringed instruments for the U.S. military. When the company moved from Ohio to New York, it expanded its production range to electric pianos, organs, and jukeboxes.

And thus ends today’s history lesson.

Tijuana Rick and the Wurlitzer

Since he had prior experience in repurposing physical switches for digital ends, Tijuana Rick felt confident that he could modify the newly acquired jukebox to play MP3s while still using the standard, iconic track selection process.

In order to do this, however, he had to venture into brand-new territory: mould making. Since many of the Wurlitzer’s original buttons were in disrepair, Tijuana Rick decided to try his hand at making moulds to create a set of replacements. Using an original button, he made silicone moulds, and then produced perfect button clones in exactly the right shade of red.

Then he turned to the computing side of the project. While he set up an Arduino Mega to control the buttons, Tijuana Rick decided to use a Raspberry Pi to handle the audio playback. After an extensive online search for code inspiration, he finally found this script by Thomas Sprinkmeier and used it as the foundation for the project’s software.

Fixer-uppers

We see a lot of tech upgrades and restorations using Raspberry Pis, from old cameras such as this Mansfield Holiday Zoom, and toys like this beloved Teddy Ruxpin, to… well… dinosaurs. If a piece of retro tech has any room at all for a Pi or a Pi Zero, someone in the maker community is bound to give it a 21st century overhaul.

What have been your favourite Pi retrofit projects so far? Have you seen a build that’s inspired you to restore or recreate something from your past? Got any planned projects or successful hacks? Make sure to share them in the comments below!

In recent years many pirates have moved from more traditional download sites and tools to streaming portals.

These streaming sites come in all shapes and sizes, and there is fierce competition among site owners to grab the most traffic. More traffic means more money, after all.

While building a streaming site from scratch is quite an operation, there are scripts on the market that allow virtually anyone to set up their own streaming index in just a few minutes.

TVStreamCMS is one of the leading players in this area. To find out more we spoke to one of the people behind the project, who prefers to stay anonymous, but for the sake of this article, we’ll call him Rick.

“The idea came up when I wanted to make my own streaming site. I saw that they make a lot of money, and many people had them,” Rick tells us.

After discovering that there were already a few streaming site scripts available, Rick saw an opportunity. None of the popular scripts at the time offered automatic updates with freshly pirated content, a gap that was waiting to be filled.

“I found out that TVStreamScript and others on ThemeForest like MTDB were available, but these were not automatized. Instead, they were kinda generic and hard to update. We wanted to make our own site, but as we made it, we also thought about reselling it.”

Soon after, TVStreamCMS was born. In addition to using it for his own project, Rick also decided to offer it to others who wanted to run their own streaming portal, for a monthly subscription fee.

TVStreamCMS website

According to Rick, the script’s automated content management system has been its key selling point. The buyers don’t have to update or change much themselves, as pretty much everything is automated.

This has generated hundreds of sales over the years, according to the developer. And several of the sites that run on the script are successfully “stealing” traffic from the original, such as gomovies.co, which ranks well above the real GoMovies in Google’s search results.

“Currently, a lot of the sites competing against the top level streaming sites are using our script. This includes 123movies.co, gomovies.co and putlockers.tv, keywords like yesmovies fmovies gomovies 123movies, even in different Languages like Portuguese, French and Italian,” Rick says.

The pirated videos that appear on these sites come from a database maintained by the TVStreamCMS team. These are hosted on their own servers, but also by third parties such as Google and Openload.

When we looked at one of the sites we noticed a few dead links, but according to Rick, these are regularly replaced.

“Dead links are maintained by our team, DMCA removals are re-uploaded, and so on. This allows users not to worry about re-uploading or adding content daily and weekly as movies and episodes release,” Rick explains.

While this all sounds fine and dandy for prospective pirates, there are some significant drawbacks.

Aside from the obvious legal risks that come with operating one of these sites, there is also a financial hurdle. The full package costs $399 plus a monthly fee of $99, and the basic option is $399 and $49 per month.

TVStreamCMS subscription plans

There are apparently plenty of site owners who don’t mind paying this kind of money. That said, not everyone is happy with the script. TorrentFreak spoke to a source at one of the larger streaming sites, who believes that these clones are misleading their users.

TVStreamCMS is not impressed by the criticism. They know very well what they are doing. Their users asked for these clone templates, and they are delivering them, so both sides can make more money.

“We’re in the business to make money and grow the sales,” Rick says.

“So we have made templates looking like 123movies, Yesmovies, Fmovies and Putlocker to accommodate the demands of the buyers. A similar design gets buyers traffic and is very, very effective for new sites, as users who come from Google they think it is the real website.”

The fact that 123Movies changed its name to GoMovies and recently changed to a GoStream.is URL, only makes it easier for clones to get traffic, according to the developer.

“This provides us with a lot of business because every time they change their name the buyers come back and want another site with the new name. GoMovies, for instance, and now Gostream,” Rick notes.

Of course, the infringing nature of the clone sites means that there are many copyright holders who would rather see the script and its associated sites gone. Previously, the Hollywood group FACT managed to shut down TVstreamScript, taking down hundreds of sites that relied on it, and it’s likely that TVStreamCMS is being watched too.

For now, however, more and more clones continue to flood the web with pirated streams.

We’re very excited to announce that Scratch 2.0 is now available as an offline app for the Raspberry Pi! This new version of Scratch allows you to control the Pi’s GPIO (General Purpose Input and Output) pins, and offers a host of other exciting new features.

Offline accessibility

The most recent update to Raspbian includes the app, which makes Scratch 2.0 available offline on the Raspberry Pi. This is great news for clubs and classrooms, where children can now use Raspberry Pis instead of connected laptops or desktops to explore block-based programming and physical computing.

Controlling GPIO with Scratch 2.0

As with Scratch 1.4, Scratch 2.0 on the Raspberry Pi allows you to create code to control and respond to components connected to the Pi’s GPIO pins. This means that your Scratch projects can light LEDs, sound buzzers and use input from buttons and a range of sensors to control the behaviour of sprites. Interacting with GPIO pins in Scratch 2.0 is easier than ever before, as text-based broadcast instructions have been replaced with custom blocks for setting pin output and getting current pin state.

To add GPIO functionality, first click ‘More Blocks’ and then ‘Add an Extension’. You should then select the ‘Pi GPIO’ extension option and click OK.

In the ‘More Blocks’ section you should now see the additional blocks for controlling and responding to your Pi GPIO pins. To give an example, the entire code for repeatedly flashing an LED connected to a GPIO pin is now:

To react to a button connected to a GPIO pin, simply set the pin as an input, and use the ‘gpio (x) is high?’ block to check the button’s state. In the example below, the Scratch cat will say “Pressed” only when the button is being held down.

Cloning sprites

Scratch 2.0 also offers some additional features and improvements over Scratch 1.4. One of the main new features of Scratch 2.0 is the ability to create clones of sprites. Clones are instances of a particular sprite that inherit all of the scripts of the main sprite.

The scripts below show how cloned sprites are used — in this case to allow the Scratch cat to throw a clone of an apple sprite whenever the space key is pressed. Each apple sprite clone then follows its ‘when I start as a clone’ script.

The cloning functionality avoids the need to create multiple copies of a sprite, for example multiple enemies in a game or multiple snowflakes in an animation.

Custom blocks

Scratch 2.0 also allows the creation of custom blocks, allowing code to be encapsulated and used (possibly multiple times) in a project. The code below shows a simple custom block called ‘jump’, which is used to make a sprite jump whenever it is clicked.

These custom blocks can also optionally include parameters, allowing further generalisation and reuse of code blocks. Here’s another example of a custom block that draws a shape. This time, however, the custom block includes parameters for specifying the number of sides of the shape, as well as the length of each side.

The custom block can now be used with different numbers provided, allowing lots of different shapes to be drawn.

Peripheral interaction

Another feature of Scratch 2.0 is the addition of code blocks to allow easy interaction with a webcam or a microphone. This opens up a whole new world of possibilities, and for some examples of projects that make use of this new functionality see Clap-O-Meter which uses the microphone to control a noise level meter, and a Keepie Uppies game that uses video motion to control a football. You can use the Raspberry Pi or USB cameras to detect motion in your Scratch 2.0 projects.

Other new features include a vector image editor and a sound editor, as well as lots of new sprites, costumes and backdrops.

As always, we love to see the projects you create using the Raspberry Pi. Once you’ve upgraded to Scratch 2.0, tell us about your projects via Twitter, Instagram and Facebook, or by leaving us a comment below.

For a niche that has had millions of words written about it over the past 18 years or so, it’s striking that most big piracy stories have had the emotions of people at their core.

When The Pirate Bay was taken down by the police eleven years ago it was global news, but the real story was the sense of disbelief and loss felt by millions of former users. Outsiders may dismiss these feelings, but they are very common and very real.

Of course, those negative emotions soon turned to glee when the site returned days later, but full-on, genuine resurrections are something that few big sites have been able to pull off since. What we have instead today is the sudden disappearance of iconic sites and a scrambling by third-party opportunists to fill in the gaps with look-a-like platforms.

The phenomenon has affected many big sites, from The Pirate Bay itself through to KickassTorrents, YTS/YIFY, and more recently, ExtraTorrent. When sites disappear, it’s natural for former users to look for replacements. And when those replacements look just like the real deal there’s a certain amount of comfort to be had. For many users, these sites provide the perfect antidote to their feelings of loss.

That being said, the clone site phenomenon has seriously got out of hand. Pioneered by players in the streaming site scene, fake torrent sites can now be found in abundance wherever there is a brand worth copying. ExtraTorrent operator SaM knew this when he closed his site last month, and he took the time to warn people away from them personally.

“Stay away from fake ExtraTorrent websites and clones,” he said.

It’s questionable how many listened.

Within days, users were flooding to fake ExtraTorrent sites, encouraged by some elements of the press. Despite having previously reported SaM’s clear warnings, some publications were still happy to report that ExtraTorrent was back, purely based on the word of the fake sites themselves. And I’ve got a bridge for sale, if you have the cash.

While misleading news reports must take some responsibility, it’s clear that when big sites go down a kind of grieving process takes place among dedicated former users, making some more likely to clutch at straws. While some simply move on, others who have grown more attached to a platform they used to call home can go into denial.

This reaction has often been seen in TF’s mailbox, particularly when YTS/YIFY went down. More recently, dozens of emails informed us that ExtraTorrent had gone, with many others asking when it was coming back. But the ones that stood out most were from people who had read SaM’s message, read TF’s article stating that ALL clones were fakes, yet still wanted to know whether sites a, b and c were legitimate or not.

We approached a user on Reddit who had asked similar things and been derided by other users for his apparent reluctance to accept that ExtraTorrent had gone. We didn’t find stupidity (as a few in /r/piracy had cruelly suggested) but a genuine sense of loss.

“I loved the site dude, what can I say?” he told TF. “Just kinda got used to it and hung around. Before I knew it I was logging in every day. In time it just felt like home. I miss it.”

The user hadn’t seen the articles claiming that one of the imposter ExtraTorrent sites was the real deal. He did, however, seem a bit unsettled when we told him it was a fake. But when we asked if he was going to stop using it, we received an emphatic “no”.

“Dude it looks like ET and yeah it’s not quite the same but I can get my torrents. Why does it matter what crew [runs it]?” he said.

It does matter, of course. The loss of a proper torrent site like ExtraTorrent, which had releasers and a community, can never be replaced by a custom-skinned Pirate Bay mirror. No matter how much it looks like a lost friend, it’s actually a pig in lipstick that contributes little to the ecosystem.

That being said, it’s difficult to counter the fact that some of these clones make people happy. They fill a void that other sites, for mainly cosmetic reasons, can’t fill. With this in mind, the grounds for criticism weaken a little – but not much.

For anyone who has watched the Black Mirror episode ‘Be Right Back‘, it’s clear that sudden loss can be a hard thing for humans to accept. When trying to fill the gap, what might initially seem like a good replacement is almost certainly destined to disappoint longer term, when the sub-standard copy fails to capture the heart and soul of the real deal.

It’s an issue that will occupy the piracy scene for some time to come, but interestingly, it’s also an argument that Hollywood has used against piracy itself for decades. But that’s another story.

When ExtraTorrent shut down last week, millions of people were left without their favorite spot to snatch torrents.

This meant that after the demise of KickassTorrents and Torrentz last summer, another major exodus commenced.

The search for alternative torrent sites is nicely illustrated by Google Trends. Immediately after ExtraTorrent shut down, worldwide searches for “torrent sites” shot through the roof, as seen below.

“Torrent sites” searches (30 days)

As is often the case, most users spread across sites that are already well-known to the file-sharing public.

TorrentFreak spoke to several people connected to top torrent sites who all confirmed that they had witnessed a significant visitor boost over the past week and a half. As the largest torrent site around, many see The Pirate Bay as the prime alternative.

And indeed, a TPB staffer confirms that they have seen a big wave of new visitors coming in, to the extent that it was causing “gateway errors,” making the site temporarily unreachable.

Thus far the new visitors remain rather passive though. The Pirate Bay hasn’t seen a large uptick in registrations and participation in the forum remains normal as well.

“Registrations haven’t suddenly increased or anything like that, and visitor numbers to the forum are about the same as usual,” TPB staff member Spud17 informs TorrentFreak.

Another popular torrent site, which prefers not to be named, reported a surge in traffic too. For a few days in a row, this site handled 100,000 extra unique visitors. A serious number, but the operator estimates that he only received about ten percent of ET’s total traffic.

More than 40% of these new visitors come from India, where ExtraTorrent was relatively popular. The site operator further notes that about two-thirds have an adblocker, which makes the new traffic pretty much useless for those who are looking to make money.

That brings us to the last category of site owners, the opportunist copycats, who are actively trying to pull estranged ExtraTorrent visitors on board.

Earlier this week we wrote about the attempts of ExtraTorrent.cd, which falsely claims to have a copy of the ET database, to lure users. In reality, however, it’s nothing more than a Pirate Bay mirror with an ExtraTorrent skin.

And then there are the copycats over at ExtraTorrent.ag. These are the same people who successfully hijacked the EZTV and YIFY/YTS brands earlier. With ExtraTorrent.ag they now hope to expand their portfolio.

Over the past few days, we received several emails from other ExtraTorrent “copies”, all trying to get a piece of the action. Not unexpected, but pretty bold, particularly considering the fact that ExtraTorrent operator SaM specifically warned people not to fall for these fakes and clones.

With millions of people moving to new sites, it’s safe to say that the torrent ‘community’ is in turmoil once again, trying to find a new status quo. But this probably won’t last for very long.

While some of the die-hard ExtraTorrent fans will continue to mourn the loss of their home, history has told us that, in general, the torrent community is quick to adapt. Until the next site goes down…

The only strong message sent out by ExtraTorrent’s operator was to “stay away from fake ExtraTorrent websites and clones.”

Fast forward a few days and the first copycats have indeed appeared online. While this was expected, it’s always disappointing to see “news” sites, including the likes of Forbes and The Inquirer, giving them exposure without doing thorough research.

“We are a group of uploaders and admins from ExtraTorrent. As you know, SAM from ExtraTorrent pulled the plug yesterday and took all data offline under pressure from authorities. We were in deep shock and have been working hard to get it back online with all previous data,” the email, sent out to several news outlets read.

What followed was a flurry of ‘ExtraTorrent is back’ articles and thanks to those, a lot of people now think that Extratorrent.cd is a true resurrection operated by the site’s former staffers and fans.

However, aside from its appearance, the site has absolutely nothing to do with ET.

The site is an imposter operated by the same people who also launched Kickass.cd when KAT went offline last summer. In fact, the content on both sites doesn’t come from the defunct sites they try to replace, but from The Pirate Bay.

Yes indeed, ExtraTorrent.cd is nothing more than a Pirate Bay mirror with an ExtraTorrent skin.

There are several signs clearly showing that the torrents come from The Pirate Bay. Most easy to spot, perhaps, is a comparison of search results which are identical on both sites.

Chaparall search on Extratorrent.cd

The ExtraTorrent “resurrection” even lists TPB’s oldest active torrent from March 2004, which was apparently uploaded long before the original ExtraTorrent was launched.

Chaparall search on TPB

TorrentFreak is in touch with proper ex-staffers of ExtraTorrent who agree that the site is indeed a copycat. Some ex-staffers are considering the launch of a new ET version, just like the KAT admins did in the past, but if that happens, it will take a lot more time.

“At the moment we are all figuring out how to go about getting it back up and running in a proper fashion, but as you can imagine there a lot of obstacles and arguments, lol,” ex-ET admin Soup informed us.

So, for now, there is no real resurrection. ExtraTorrent.cd sells itself as much more than it is, as it did with Kickass.cd. While the site doesn’t have any malicious intent, aside from luring old ET members under false pretenses, people have the right to know what it really is.

When authorities in the United States and New Zealand shut down Megaupload in 2012, large amounts of data were seized in both locations. The data in the US is currently gathering dust but over in New Zealand yet another storm is brewing.

In the weeks following the raid, hard drives seized from Dotcom in New Zealand were cloned and sent to the FBI in the United States. A judge later found that this should not have been allowed, ruling that the copies in the FBI’s possession must be destroyed.

Like almost every process in the Megaupload saga, the ruling went to appeal and in 2014 Dotcom won again, with the Court of Appeal upholding the lower court’s decision, stating that the removal of the clones to the United States was “plainly not authorized.”

At the time Dotcom said that fighting back is “encoded in his DNA” and today he’s taking that fight to the FBI. On Sunday, FBI director James Comey touched down in Queenstown, New Zealand, for an intelligence conference. With Comey in the country, Dotcom seized the moment to file a complaint with local police.

In the complaint shared with TorrentFreak, lawyer Simon Cogan draws police attention to the Court of Appeal ruling determining that clones of Dotcom drives were unlawfully shipped to the FBI in the United States. Since Comey is in the country, police should take the opportunity to urgently interview him over this potential criminal matter.

“As director of the FBI, Mr Comey will be able to assist Police with their investigation of the matters raised in Mr Dotcom’s complaint,” the complaint reads, noting several key areas of interest as detailed below.

Speaking with TF, Dotcom says that since the New Zealand High Court and Court of Appeal have both ruled that the FBI had no authority to remove his data from New Zealand, the FBI acted unlawfully.

“In simple terms the FBI has committed theft,” Dotcom says.

“The NZ courts don’t have jurisdiction in the US and could therefore not assist me in getting my data back. But FBI Director Comey has just arrived in New Zealand for a conference meaning he is in the jurisdiction of NZ courts. We have asked the NZ police to question Mr Comey about the theft and to investigate.”

In addition to seeking assistance from the police, Dotcom says that he’s also initiated a new lawsuit to have his data returned.

“We have also launched a separate civil court action to force Mr Comey to return my data to New Zealand and to erase any and all copies the FBI / US Govt holds. We expect an urgent hearing of the matter in the High Court tomorrow,” Dotcom concludes.

It’s likely that this will be another Dotcom saga that will run and run, but despite the seriousness of the matter in hand, Dotcom was happy to take to Twitter this morning, delivering a video message in his own inimitable style.

If you’re a Pi fan, you’ll recognise our official case, designed by Kinneir Dufort. We’re rather proud of it, and if sales are anything to go by, you seem to like it a lot as well.

Unfortunately, some scammers in China have also spotted that Pi owners like the case a lot, so they’ve been cloning it and trying to sell it in third-party stores.

We managed to get our hands on a sample through a proxy pretending to be a Pi shop, and we have some pictures so you can see what the differences are and ensure that you have the genuine article. The fake cases are not as well-made as the real thing, and they also deprive us of some much-needed charitable income. As you probably know, the Raspberry Pi Foundation is a charity. All the money we make from selling computers, cases, cameras, and other products goes straight into our charitable fund to train teachers, provide free learning resources, teach kids, help build the foundations of digital making in schools, and much more.

Let’s do a bit of spot-the-difference.

Fake case. Notice the poor fit, the extra light pipes (the Chinese cloner decided not to make different cases for Pi2 and Pi3), and the sunken ovals above them.

Real case. Only one set of light pipes (this case is for a Pi3), no ovals, and the whole thing fits together much more neatly. There’s no lip in the middle piece under the lid.

There are some other telltale signs: have a close look at the area around the logo on the white lid.

This one’s the fake. At about the 7 o’clock position, the plastic around the logo is uneven and ripply – the effect’s even more pronounced in real life.

This is what a real case looks like. The logo is much more crisp and cleanly embossed, and there are no telltale lumps and bumps around it.

The underside’s a bit off as well:

The cloners are using a cheaper, translucent non-slip foot on the fake case, and the feet don’t fit well in the lugs which house them. Again, you can see that the general fit is quite bad.

Real case. Near-transparent non-slip feet, centred in their housing, and with no shreds of escaping glue. There are no rectangular tooling marks on the bottom. The SD card slot is a different shape.

Please let us know if you find any of these fake cases in the wild. And be extra-vigilant if you’re buying somewhere like eBay to make sure that you’re purchasing the real thing. We also make a black and grey version of the case, although the pink and white is much more popular. We haven’t seen these cloned yet, but if you spot one we’d like to know about it, as we can then discuss them with the resellers. It’s more than possible that retailers won’t realise they’re buying fakes, but it damages our reputation when something shonky comes on the market and it looks like we’ve made it. It damages the Raspberry Pi Foundation’s pockets too, which means we can’t do the important work in education we were set up to do.

Editor’s note: We posted this article originally in August 2016. We’ve since updated it with new information.

APFS, or Apple File System, is one of the biggest changes coming to every new Apple device. It makes its public debut with the release of iOS 10.3, but it’s also coming to the Mac (in fact, it’s already available if you’re a developer). APFS changes how the Mac, iPhone and iPad store files. Backing up your data is our job, so we think a new file system is fascinating. Let’s take a look at APFS to understand what it is and why it’s so important, then answer some questions about it.

File systems are a vital component of any computer or electronic device. The file system tells the computer how to interact with data. Whether it’s a picture you’ve taken on your phone, a Microsoft Word document, or an invisible file the computer needs, the file system accounts for all of that stuff.

File systems may not be the sexiest feature, but the underlying technology is so important that it gets developers interested. Apple revealed plans for APFS at its annual Worldwide Developers Conference in June 2016. APFS thoroughly modernizes the way Apple devices track stored information. APFS also adds some cool features that we haven’t seen before in other file systems.

APFS first appeared with macOS 10.12 Sierra as an early test release for developers to try out. Its first general release is iOS 10.3, and Apple will migrate everything to it in the future. Since our Mac client is a native app, we, like many Apple developers, have been boning up on APFS and what it means.

What Is APFS?

Apple hasn’t said what the P in APFS stands for, but it distinguishes the new file system from Apple File Service (AFS), a term used to describe older Apple file and network services.

APFS is designed to scale from the smallest Apple device to the biggest. It works with watchOS, tvOS, iOS and macOS, spanning the entire Apple product line. It’s designed from the get-go to work well on modern Apple device architectures and offers plenty of scalability going forward.

APFS won’t change how you see files. The Finder, the main way you interact with files on your Mac, won’t undergo any major cosmetic changes because of APFS (at least none Apple has told us about yet). Neither will iOS, which mostly hides file management from you anyway. APFS changes the under-the-hood machinery that tells the computer where to put data and how to work with it.

Why Did Apple Make APFS?

The current file system Apple uses is HFS+. HFS was introduced in 1985, back when the Mac was still new. That’s right, more than thirty years ago now. (HFS+ came later with some improvements for newer Macs.)

To give you an idea of how “state of the art” has changed since then, consider this. My first Mac, which came out late in 1984, had 512 KB of RAM (four times the original Mac’s memory) and a single floppy drive that could store 400 KB. The computer I’m writing on now has 8 GB of RAM – roughly 16 thousand times more than my first Mac – and 512 GB of storage, more than 1.2 million times what that first Mac’s floppy could hold. Think about that the next time you get a message that your drive is full!

Given the pace of computer technology and development, it’s a bit startling that we still use anything developed so long ago. But that’s how essential and important a file system is to a computer’s operation.

HFS+ was cutting-edge for its time, but Apple made it for computers with floppy disk drives and hard drives. Floppies are long gone. Most Apple devices now use solid state storage like built-in Flash and Solid State Drives (SSDs), and those store data differently than hard drives and floppies did.

Why Is APFS Better?

APFS better suits the needs of today’s and tomorrow’s computers and mobile devices because it’s made for solid-state storage – Flash, and SSDs. These storage technologies work differently than spinning drives do, so it only makes sense to optimize the file system to take advantage.

Apple’s paving the way to store lots more data with APFS. HFS+ uses 32-bit file IDs, for example, while APFS ups that to 64-bit. That means that today, your Mac can keep track of about 4 billion individual pieces of information on its drive. APFS ups that to more than 9 quintillion – a nine followed by 18 zeroes.
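To put those numbers in perspective, here’s a quick back-of-the-envelope check. This is plain Python arithmetic, nothing APFS-specific, and it reads the article’s “9 quintillion” figure as the limit of a 63-bit counter – that interpretation is ours, not Apple’s:

```python
# Rough comparison of HFS+ and APFS file-ID headroom.
# HFS+ uses 32-bit file IDs; APFS moves to 64-bit identifiers.

hfs_plus_limit = 2 ** 32   # about 4.29 billion distinct file IDs
apfs_limit = 2 ** 63       # about 9.2 quintillion, matching the "9 quintillion" figure

print(f"HFS+ file IDs: {hfs_plus_limit:,}")   # 4,294,967,296
print(f"APFS file IDs: {apfs_limit:,}")       # 9,223,372,036,854,775,808
print(f"Increase: roughly {apfs_limit // hfs_plus_limit:,}x")
```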

Even though APFS can keep track of orders of magnitude more data than HFS+, you’ll see much faster performance. When you need to save or duplicate files, APFS shares data between files whenever possible. Instead of duplicating information the way HFS+ does, APFS updates metadata links that point to the stored information. Clones, or copies of files or folders, can be created instantly, so you won’t have to sit and watch as gigabytes of files are duplicated en masse. In fact, clones take up no additional space, since they point back to the original data! You’ll get much better bang for your storage buck with APFS than HFS+ can manage.
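If you want to see cloning in action once your Mac is on APFS, one low-risk way is macOS’s own cp command, which has a -c flag that requests a clonefile(2) copy. Here’s a small Python sketch around it; the file names are placeholders, and on a volume or macOS release that doesn’t support cloning the command may simply fail:

```python
import subprocess
import time

SRC = "big_video.mov"          # placeholder: any large existing file on an APFS volume
DST = "big_video_clone.mov"    # placeholder destination on the same volume

start = time.time()
# "cp -c" asks macOS to use clonefile(2), so the copy shares the original
# file's data blocks instead of duplicating gigabytes of data.
subprocess.run(["cp", "-c", SRC, DST], check=True)
print(f"Clone created in {time.time() - start:.3f} seconds")
```

Even for a multi-gigabyte file, the clone should appear almost instantly, because only metadata is written.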

Speaking of space, Space Sharing is another new feature of APFS. It helps the Mac manage free space on its drives more efficiently: you can set up multiple volumes, even with different file system formats, on a single physical device, and all of them draw from the same pool of free space. Today you have to jump through hoops if you resize partitions and want to re-use the de-allocated space. APFS instead treats each physical device as a “container” holding multiple “volumes.”
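On macOS releases that ship APFS support, Apple’s diskutil tool gained an apfs verb for inspecting containers and managing the volumes inside them. As a rough illustration (the listing command is read-only; the commented-out line would modify your disk, and “disk1” is only a placeholder container identifier):

```python
import subprocess

# List APFS containers and the volumes inside them (read-only, safe to run).
subprocess.run(["diskutil", "apfs", "list"], check=True)

# Adding a second volume to a container: it shares the container's free space
# with the existing volumes, so no repartitioning is needed.
# "disk1" is a placeholder -- substitute your own container identifier, and
# only run this if you actually want a new volume created.
# subprocess.run(["diskutil", "apfs", "addVolume", "disk1", "APFS", "Scratch"],
#                check=True)
```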

How Does APFS Affect Performance?

Networking is crucial for almost all computers and computing devices. Over the years there’s been a lot of emphasis on tuning operating system performance for maximum throughput. That’s helpful to developers like us because we store data in the cloud. But that’s not the whole story. Latency – the amount of time between you telling your computer to do something and when it happens – also has a significant effect on performance.

Has “the Beachball of Death” ever plagued you? You’ll click a button or try to open something, and the cursor changes to a spinning disk that looks for all the world like a beachball. Apple’s doing a lot with APFS to make those beachballs go away, because the file system’s design prioritizes low latency.

Apple has found other ways to improve performance wherever possible. Take crash protection, for example. HFS+ uses journaling as a form of crash protection: It keeps track of changes not yet made to the file system in log files. Unfortunately, journaling creates performance overhead. Those log files are always being written and read. APFS replaces that with a new copy-on-write metadata scheme that’s much more efficient.
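To make the difference concrete, here’s a deliberately simplified toy sketch in plain Python – it is not how APFS (or any real file system) is implemented, just an illustration of why copy-on-write avoids the journal’s extra writes:

```python
# Toy illustration of journaling vs. copy-on-write metadata updates.
# A conceptual sketch only; real file systems are far more involved.

journal = []                      # stand-in for HFS+-style log files
metadata = {"file_count": 100}    # the "live" metadata structure

def journaled_update(key, value):
    # 1. Record the pending change in the journal (extra writes every time).
    journal.append(("set", key, value))
    # 2. Apply it to the live structure.
    metadata[key] = value
    # 3. Mark the journal entry complete (more I/O).
    journal.append(("done", key))

def copy_on_write_update(current, key, value):
    # Build a new copy of the metadata with the change applied...
    new_version = dict(current)
    new_version[key] = value
    # ...and "publish" it by handing back the new version. The old version is
    # untouched, so a crash mid-update never leaves it half-written.
    return new_version

journaled_update("file_count", 101)
metadata = copy_on_write_update(metadata, "file_count", 102)
print(metadata, journal)
```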

How Secure Is APFS?

Apple is very concerned with user privacy, and its defense of users’ privacy has occasionally put it at loggerheads with governments and individuals who want access to your data. Apple is taking your privacy seriously with APFS, too, thanks to much more sophisticated encryption options than before.

Apple’s current encryption scheme is called FileVault. FileVault is “whole disk” encryption. You turn it on, and your Mac encrypts your hard drive. That encrypted data is, for all intents and purposes, unrecognizable unless you enter a password or key to unlock it.

The problem is that FileVault is either on or off, and it’s on or off for the whole volume. So once you’ve unlocked it, your data is potentially vulnerable. APFS still supports full disk encryption, but it can also encrypt individual files and metadata, with single or multi-key support. That provides additional security for your most sensitive data.

As a backup company, we’re particularly interested in one APFS feature: its support for snapshots. Snapshots are a pretty standard feature of enterprise storage and backup systems, but we haven’t seen them on the Mac before. A snapshot is a read-only, point-in-time record of the file system: it contains pointers to data already stored on your disk rather than a second copy of that data, so it’s compact to create and fast to read.
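If you’re on a macOS release that pairs APFS with Time Machine’s local snapshots (High Sierra or later, so this assumes a newer system than the provisional support described above, with Time Machine configured), you can poke at snapshots from the command line. A small sketch:

```python
import subprocess

VOLUME = "/"   # the boot volume

# Ask Time Machine to take a local APFS snapshot (High Sierra or later;
# assumes Time Machine is set up on this Mac).
subprocess.run(["tmutil", "localsnapshot"], check=True)

# List the snapshots that exist for the volume. Each entry is a named,
# read-only view of the file system at a point in time -- pointers to data
# already on disk, not a second copy of it.
subprocess.run(["tmutil", "listlocalsnapshots", VOLUME], check=True)
```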

How Do I Get APFS?

If you’ve upgraded to iOS 10.3 or later, your iPhone or iPad has already made the switch. There’s nothing more to do. If you’re a Mac user, you’re best off waiting for now. APFS support on the Mac is still provisional and mainly the purview of developers. But it’s coming soon, and when it does, Apple promises the same sort of seamless conversion that iPhone and iPad customers have.

When the time is right, make sure to back up your Mac before making any major changes – just as you should with your iPhone or iPad if you haven’t yet installed 10.3. If you need help, head over to our Computer Backup Guide for more tips.

There’s a lot more under the hood in APFS, but that gives you a broad overview of what it is and why we’re excited. We hope you are too. APFS is an “under the hood” enhancement in iOS 10.3 that shouldn’t have any significant effect on how your Apple gear works today, but it paves the way for what’s to come in the future.
