New research published by INCOPRO this week shows that traffic to blocked pirate sites has decreased 53.4% since the first measures were implemented a year ago. In total, usage of the top 250 pirate sites dropped a significant 25.4% in Australia.

In summary, the research confirms that direct traffic to blocked sites has decreased dramatically. Or put differently, the site blocking efforts actually block pirate sites, which by itself should hardly come as a surprise.

In fact, one might wonder how effective the blockades really are when nearly half of all direct traffic to the blocked sites in Australia remains intact and dozens of the country’s ISPs are involved.

On top, it’s also worth mentioning that the research doesn’t take VPN usage into account. Australian interest in VPNs surged after the blockades were announced, so many people are likely circumventing the blockades using foreign VPNs.

While VPNs were not factored in, the current research did look at proxy site traffic and concludes that this only substitutes a small portion of the traffic that went to pirate sites before the blockades.

While it’s undoubtedly true that direct traffic to blocked sites has dropped, the research also includes some odd results. For example, it attributes a recent drop in Isohunt.to traffic to the blocking measures, when in reality the site actually shut down.

“ISOHunt usage has been on a downward trend since December 2016, and is now at its lowest on record having reduced by 96.4% since blocking began,” the report reads, drawing on data from Alexa.

But perhaps we’re nitpicking.

Creative Content Australia (CCA) is happy with these results and states that the fight against piracy has claimed a significant victory. However, the anti-piracy group also stressed that more can be done.

“The reduction in piracy is exciting news but that 53% could be 90%,” CCA Chairman Graham Burke says, using the opportunity to take another stab at Google.

“The government has shut the front door, but Google is leading people to the back door, showing no respect for Australian law or courts let alone any regard for the Australian economy and cultural way of life,” Burke adds.

INCOPRO’s research will undoubtedly be used to convince lawmakers that the current site blocking efforts should remain in place.

With this in mind, the release of the report comes at an interesting time. The previously unpublished results were drawn up last December, but were only made public this week, a few days after the Australian Government announced a review of the site blocking measures.

I’m aware this is going to be a very niche topic. Electronically signing PDFs is far from a mainstream use case. However, I’ll write about it for two reasons – first, I think it will be very useful for those few who actually need it, and second, I think it will become more and more common as the eIDAS regulation gains popularity – it basically says that electronic signatures are recognized everywhere in Europe (now, that’s not exactly true, because of some boring legal details, but anyway).

So, what is the use case – first, you have to electronically sign the PDF with a digital signature (the legal term is “electronic signature”, so I’ll use them interchangeably, although they don’t fully match – e.g. any electronic data applied to other data can be seen as an electronic signature, whereas a digital signature is the PKI-based signature).

Second, you may want to actually display the signature on the pages, rather than have the PDF reader recognize it and show it in some side-panel. Why is that? Because people are used to seeing signatures on pages and some may insist on having the signature visible (true story – I’ve got a comment that a detached signature “is not a REAL electronic signature, because it’s not visible on the page”).

Now, notice that I wrote “pages”, not “page”. Yes, an electronic document doesn’t have pages – it’s a stream of bytes. So having the signature just on the last page is okay. But, again, people are used to signing all pages, so they’d prefer the electronic signature to be visible on all pages.

And that makes the task tricky – PDF is good with having a digital signature box on the last page, but having multiple such boxes doesn’t work well. Therefore one has to add other types of annotations that look like a signature box and, when clicked, open the signature panel (just like an actual signature box).

I have to introduce here DSS – a wonderful set of components by the European Commission that can be used to sign and validate all sorts of electronic signatures. It’s open source, and you can use it in any way you like. Deploy the demo application, use only the libraries, whatever. It includes the signing functionality out of the box – just check the PAdESService or the PDFBoxSignatureService. It even includes the option to visualize the signature once (on a particular page).

However, it doesn’t have the option to show “stamps” (images) on multiple pages. Which is why I forked it and implemented the functionality. Most of my changes are in the PDFBoxSignatureService in the loadAndStampDocument(..) method. If you want to use that functionality you can just build a jar from my fork and use it (by passing the appropriate SignatureImageParameters to PAdESService.sign(..) to define what the signature will look like).

Why is this needed in the first place? Because when a document is signed, you cannot modify it anymore, as that would change the hash. However, PDFs have incremental updates, which allow appending to the document and thus having a newer version without modifying anything in the original version. That way the signature is still valid (the originally signed content is not modified), but new stuff is added. In our case, this new stuff is some “annotations”, which represent an image and a clickable area that opens the signature panel (in Adobe Reader at least). And while they are added before the signature box is added, if there is more than one signer, the 2nd signer’s annotations are added after the first signature.

Sadly, PDFBox doesn’t support that out of the box. Well, it almost does – the piece of code below looks hacky, and it took a while to figure out what exactly should be called and when, but it works with just a single reflection call:
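The snippet itself isn’t reproduced in this copy of the post, but a minimal PDFBox 2.x sketch of the same idea might look like the following. The coordinates are illustrative, a real implementation would also attach an appearance stream carrying the signature image, and the single reflection call mentioned above lives in the fork, so it is reduced to a comment here:

import java.io.OutputStream;
import java.util.List;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.pdmodel.PDPage;
import org.apache.pdfbox.pdmodel.common.PDRectangle;
import org.apache.pdfbox.pdmodel.interactive.annotation.PDAnnotation;
import org.apache.pdfbox.pdmodel.interactive.annotation.PDAnnotationRubberStamp;

public class StampAllPages {
    // Adds a stamp-like annotation to every page and saves the document as an
    // incremental update, so any existing signatures remain valid.
    public static void stampAllPages(PDDocument document, OutputStream out) throws Exception {
        for (PDPage page : document.getPages()) {
            PDAnnotationRubberStamp stamp = new PDAnnotationRubberStamp();
            stamp.setRectangle(new PDRectangle(400, 20, 150, 40)); // illustrative position/size
            stamp.setPrinted(true);

            List<PDAnnotation> annotations = page.getAnnotations();
            annotations.add(stamp);

            // Flag the changed objects so saveIncremental() writes them out;
            // the fork additionally needs one reflection call to force the
            // annotations into the incremental update.
            page.getCOSObject().setNeedToBeUpdated(true);
            stamp.getCOSObject().setNeedToBeUpdated(true);
        }
        document.getDocumentCatalog().getCOSObject().setNeedToBeUpdated(true);
        document.saveIncremental(out);
    }
}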

What it does is – loads the original PDF, clears some internal catalogs, adds the annotations (images) to all pages, and then “force-adds the annotations” because they “wouldn’t be saved in incremental updates otherwise”. I hope PDFBox makes this a little more straightforward, but for the time being this works, and it doesn’t invalidate the existing signatures.

I hope that this post introduces you to:

the existence of legally binding electronic signatures

the existence of the DSS utilities

the PAdES standard for PDF signing

how to place more than just one signature box in a PDF document

And I hope this article becomes more and more popular over time, as more and more businesses realize they could make use of electronic signatures.

On June 2, 2017 a group of Canadian telecoms giants including Bell Canada, Bell ExpressVu, Bell Media, Videotron, Groupe TVA, Rogers Communications and Rogers Media, filed a complaint in Federal Court against Montreal resident, Adam Lackman.

Better known as the man behind Kodi addon repository TVAddons, Lackman was painted as a serial infringer in the complaint. The telecoms companies said that, without gaining permission from rightsholders, Lackman communicated copyrighted TV shows including Game of Thrones, Prison Break, The Big Bang Theory, America’s Got Talent, Keeping Up With The Kardashians and dozens more, by developing, hosting, distributing and promoting infringing Kodi add-ons.

To limit the harm allegedly caused by TVAddons, the complaint demanded interim, interlocutory, and permanent injunctions restraining Lackman from developing, promoting or distributing any of the allegedly infringing add-ons or software. On top, the plaintiffs requested punitive and exemplary damages, plus costs.

On June 9, 2017 the Federal Court handed down a time-limited interim injunction against Lackman ex parte, without Lackman being able to mount a defense. Bailiffs took control of TVAddons’ domains but the most controversial move was the granting of an Anton Piller order, a civil search warrant which granted the plaintiffs no-notice permission to enter Lackman’s premises to secure evidence before it could be tampered with.

The order was executed June 12, 2017, with Lackman’s home subjected to a lengthy search during which the Canadian was reportedly refused his right to remain silent. Non-cooperation with an Anton Piller order can amount to a contempt of court, he was told.

With the situation seemingly spinning out of Lackman’s control, unexpected support came from the Honourable B. Richard Bell during a subsequent June 29, 2017 Federal Court hearing to consider the execution of the Anton Piller order.

The Judge said that Lackman had been subjected to a search “without any of the protections normally afforded to litigants in such circumstances” and took exception to the fact that the plaintiffs had ordered Lackman to spill the beans on other individuals in the Kodi addon community. He described this as a hunt for further evidence, not the task of preserving evidence it should’ve been.

Justice Bell concluded by ruling that while the prima facie case against Lackman may have appeared strong before the judge who heard the matter ex parte, the subsequent adversarial hearing undermined it, to the point that it no longer met the threshold.

As a result of these failings, Judge Bell vacated the Anton Piller order and dismissed the application for interlocutory injunction.

While this was an early victory for Lackman and TVAddons, the plaintiffs took the decision to appeal, and the appeal was heard November 29, 2017. Determined by a three-judge panel and signed by Justice Yves de Montigny, the decision was handed down Tuesday and it effectively turns the earlier ruling upside down.

The appeal had two matters to consider: whether Justice Bell made errors when he vacated the Anton Piller order, and whether he made errors when he dismissed the application for an interlocutory injunction. In short, the panel found that he did.

In a 27-page ruling, the first key issue concerns Justice Bell’s understanding of the nature of both Lackman and TVAddons.

The telecoms companies complained that the Judge got it wrong when he characterized Lackman as a software developer who came up with add-ons that permit users to access material “that is for the most part not infringing on the rights” of the telecoms companies.

The companies also challenged the Judge’s finding that the infringing add-ons offered by the site represented “just over 1%” of all the add-ons developed by Lackman.

“I agree with the [telecoms companies] that the Judge misapprehended the evidence and made palpable and overriding errors in his assessment of the strength of the appellants’ case,” Justice Yves de Montigny writes in the ruling.

“Nowhere did the appellants actually state that only a tiny proportion of the add-ons found on the respondent’s website are infringing add-ons.”

The confusion appears to have arisen from the fact that while TVAddons offered 1,500 add-ons in total, the heavily discussed ‘featured’ addon category on the site contained just 22 add-ons, 16 of which were considered to be infringing according to the original complaint. So, it was 16 add-ons out of 22 being discussed, not 16 add-ons out of a possible 1,500.

“[Justice Bell] therefore clearly misapprehended the evidence in this regard by concluding that just over 1% of the add-ons were purportedly infringing,” the appeals Judge adds.

After gaining traction with Justice Bell in the previous hearing, Lackman’s assertion that his add-ons were akin to a “mini Google” was fiercely contested by the telecoms companies. It also fell flat at the appeal hearing.

Justice de Montigny says that Justice Bell “had been swayed” when Lackman’s expert replicated the discovery of infringing content using Google but had failed to grasp the important differences between a general search engine and a dedicated Kodi add-on.

“While Google is an indiscriminate search engine that returns results based on relevance, as determined by an algorithm, infringing add-ons target predetermined infringing content in a manner that is user-friendly and reliable,” the Judge writes.

“The fact that a search result using an add-on can be replicated with Google is of little consequence. The content will always be found using Google or any other Internet search engine because they search the entire universe of all publicly available information. Using addons, however, takes one to the infringing content much more directly, effortlessly and safely.”

With this in mind, Justice de Montigny says there is a “strong prima facie case” that Lackman, by hosting and distributing infringing add-ons, made the telecoms companies’ content available to the public “at a time of their choosing”, thereby infringing paragraph 2.4(1.1) and section 27 of the Copyright Act.

On TVAddons itself, the Judge said that the platform is “clearly designed” to facilitate access to infringing material since it targets “those who want to circumvent the legal means of watching television programs and the related costs.”

Turning to Lackman, the Judge said he could not claim to have no knowledge of the infringing content delivered by the add-ons distributed on this site, since they were purposefully curated prior to distribution.

“The respondent cannot credibly assert that his participation is content neutral and that he was not negligent in failing to investigate, since at a minimum he selects and organizes the add-ons that find their way onto his website,” the Judge notes.

In a further setback, the Judge draws clear parallels with another case before the Canadian courts involving pre-loaded ‘pirate’ set-top boxes. Justice de Montigny says that TVAddons itself bears “many similarities” with those devices that are already subjected to an interlocutory injunction in Canada.

“The service offered by the respondent through the TVAddons website is no different from the service offered through the set-top boxes. The means through which access is provided to infringing content is different (one relied on hardware while the other relied on a website), but they both provided unauthorized access to copyrighted material without authorization of the copyright owners,” the Judge finds.

Continuing, the Judge makes some pointed remarks concerning the execution of the Anton Piller order. In short, he found little wrong with the way things went ahead and also contradicted some of the claims and beliefs circulated in the earlier hearing.

Citing the affidavit of an independent solicitor who monitored the order’s execution, the Judge said that the order was explained to Lackman in plain language and he was informed of his right to remain silent. He was also told that he could refuse to answer questions other than those specified in the order.

The Judge said that Lackman was allowed to have counsel present, “with whom he consulted throughout the execution of the order.” There was nothing, the Judge said, that amounted to the “interrogation” alluded to in the earlier hearing.

Justice de Montigny also criticized Justice Bell for failing to take into account that Lackman “attempted to conceal crucial evidence and lied to the independent supervising solicitor regarding the whereabouts of that evidence.”

Much was previously made of Lackman apparently being forced to hand over personal details of third-parties associated directly or indirectly with TVAddons. The Judge clarifies what happened in his ruling.

“A list of names was put to the respondent by the plaintiffs’ solicitors, but it was apparently done to expedite the questioning process. In any event, the respondent did not provide material information on the majority of the aliases put to him,” the Judge reveals.

But while not handing over evidence on third-parties will paint Lackman in a better light with concerned elements of the add-on community, the Judge was quick to bring up the Canadian’s history and criticized Justice Bell for not taking it into account when he vacated the Anton Piller order.

“[T]he respondent admitted that he was involved in piracy of satellite television signals when he was younger, and there is evidence that he was involved in the configuration and sale of ‘jailbroken’ Apple TV set-top boxes,” Justice de Montigny writes.

“When juxtaposed to the respondent’s attempt to conceal relevant evidence during the execution of the Anton Piller order, that contextual evidence adds credence to the appellants’ concern that the evidence could disappear without a comprehensive order.”

Dismissing Justice Bell’s findings as “fatally flawed”, Justice de Montigny allowed the appeal of the telecoms companies, set aside the order of June 29, 2017, declared the Anton Piller order and interim injunctions legal, and granted an interlocutory injunction to remain valid until the conclusion of the case in Federal Court. The telecoms companies were also awarded costs of CAD$50,000.

It’s worth noting that despite all the detail provided up to now, the case hasn’t yet got to the stage where the Court has tested any of the claims put forward by the telecoms companies. Everything reported to date is pre-trial and has been taken at face value.

TorrentFreak spoke with Adam Lackman but since he hadn’t yet had the opportunity to discuss the matter with his lawyers, he declined to comment further on the record. There is a statement on the TVAddons website which gives his position on the story so far.

About a year ago we first heard about SkyTorrents, an ambitious new torrent site which guaranteed a private and ad-free experience for its users.

Initially, we were skeptical. However, the site quickly grew a steady userbase through sites such as Reddit and after a few months, it was still sticking to its promise.

“We will NEVER place any ads,” SkyTorrents’ operator informed us last year.

“The site will remain ad-free or it will shut down. When our funds dry up, we will go for donations. We can also handover to someone with similar intent, interests, and the goal of a private and ad-free world.”

In the months that followed, these words turned out to be almost prophetic. It didn’t take long before SkyTorrents had several million pageviews per day. This would be music to the ears of many site owners but for SkyTorrents it was a problem.

With the increase in traffic, the server bills also soared. This meant that the ad-free search engine had to cough up roughly $1,500 per month, which is quite an expensive hobby. The site tried to cover at least part of the costs with donations but that didn’t help much either.

This led to the rather ironic situation where users of the site encouraged the operator to serve ads.

“Everyone is saying they would rather have ads than have the site close down,” one user wrote on Reddit last summer. “I applaud you. But there is a reason why every other site has ads. It’s necessary to get revenue when your customers don’t pay.”

The site’s operator was not easily swayed though, not least because ads also compromise people’s privacy. Eventually funds dried up and, after the passing of several more months, he has now decided to throw in the towel.

“It was a great experience to serve and satisfy people around the world,” the site’s operator says.

The site is not simply going dark though. While the end has been announced, the site’s operator is giving people the option to download and copy the site’s database of more than 15 million torrents.

That’s 444 gigabytes of .torrent files for all the archivists out there. Alternatively, the site also offers a listing of just the torrent hashes, which is a more manageable 322 megabytes.

SkyTorrents’ operator says that he hopes someone will host the entire cache of torrents and “take it forward.” In addition, he thanks hosting company NFOrce for the service it has provided.

Whether anyone will pick up the challenge remains to be seen. What has become clear though is that operating a popular ad-free torrent site is hard to pull off for long, unless you have deep pockets.

—

Update: While writing this article Skytorrents was still online, but upon publication, it is no longer accessible. The torrent archive is still available.

Introducing qrocodile, a kid-friendly system for controlling your Sonos with QR codes. Source code is available at: https://github.com/chrispcampbell/qrocodile Learn more at: http://labonnesoupe.org https://twitter.com/chrscmpbll

Sonos

SONOS is SONOS backwards. It’s also SONOS upside down, and SONOS upside down and backwards. I just learnt that this means SONOS is an ambigram. Hurray for learning!

Sonos (the product, not the ambigram) is a multi-room speaker system controlled by an app. Speakers in different rooms can play different tracks or join forces to play one track for a smooth musical atmosphere throughout your home.

If you have a Sonos system in your home, I would highly recommend accessing it from outside your home and setting it to play the Imperial March as you walk through the front door. Why wouldn’t you?

qrocodile

One day, Chris’s young children wanted to play an album while eating dinner. This one request inspired him to create qrocodile, a musical jukebox enabling his children to control the songs Sonos plays, and where it plays them, via QR codes.

It all started one night at the dinner table over winter break. The kids wanted to put an album on the turntable (hooked up to the line-in on a Sonos PLAY:5 in the dining room). They’re perfectly capable of putting vinyl on the turntable all by themselves, but using the Sonos app to switch over to play from the line-in is a different story.

The QR codes represent commands (such as Play in the living room, Use the turntable, or Build a song list) and artists (such as my current musical crush Courtney Barnett or the Ramones).

A camera attached to a Raspberry Pi 3 feeds the Pi the QR code that’s presented, and the Pi runs a script that recognises the code and sends instructions to Sonos accordingly.

Chris used a custom version of the Sonos HTTP API created by Jimmy Shimizu to gain access to Sonos from his Raspberry Pi. To build the QR codes, he wrote a script that utilises the Spotify API via the Spotipy library.
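Chris’s actual scripts are on his GitHub (linked below); purely as an illustration of the overall flow, a sketch using the zbar command-line tools and jishi’s node-sonos-http-api might look like this, with the room name and QR payloads invented:

from urllib.parse import quote
import subprocess
import requests

SONOS_API = "http://localhost:5005"  # node-sonos-http-api default port
ROOM = "Dining Room"                 # invented room name

# zbarcam (from zbar-tools) watches the camera and prints each decoded
# QR payload on its own line
proc = subprocess.Popen(["zbarcam", "--raw"], stdout=subprocess.PIPE, text=True)
for line in proc.stdout:
    action = line.strip()  # e.g. "linein", "play", "pause" (invented payloads)
    # Forward the payload to the Sonos HTTP API as an action for the room
    requests.get(f"{SONOS_API}/{quote(ROOM)}/{quote(action)}")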

His children are now able to present recognisable album art to the camera in order to play their desired track.

It’s been interesting seeing the kids putting the thing through its paces during their frequent “dance parties”, queuing up their favorite songs and uncovering new ones. I really like that they can use tangible objects to discover music in much the same way I did when I was their age, looking through my parents’ records, seeing which ones had interesting artwork or reading the song titles on the back, listening and exploring.

Chris has provided all the scripts for the project, along with a tutorial on how to set it up, on his GitHub — have a look if you want to recreate it or learn more about his code. Also check out Chris’ website for more on qrocodile and to see some of his other creations.

StarWind provides “VTL” (Virtual Tape Library) technology that enables users to back up their “VMs” (virtual machines) from Veeam to on-premises or cloud storage. StarWind does this using standard “LTO” (Linear Tape-Open) protocols. This appeals to organizations that have LTO in place, since it allows adoption of more scalable, cost-efficient cloud storage without having to update the internal backup infrastructure.

Why An Additional Backup in the Cloud?

A common backup strategy, known as 3-2-1, dictates keeping at least three copies of active data: two copies stored locally and one copy in another location.

Relying solely on on-site redundancy does not guarantee data protection after a catastrophic or temporary loss of service affecting the primary data center. For maximum data security, an on-premises private cloud backup combined with an off-site public cloud backup (known as hybrid cloud) provides the best combination of security and rapid recovery when required.

Why Consider a Hybrid Cloud Solution?

The Hybrid Cloud Provides Superior Disaster Recovery and Business Continuity

Having a backup strategy that combines on-premises storage with public cloud storage in a single or multi-cloud configuration is becoming the solution of choice for organizations that wish to eliminate dependence on vulnerable on-premises storage. It also provides reliable and rapidly deployed recovery when needed.

If an organization requires restoration of service as quickly as possible after an outage or disaster, it needs to have a backup that isn’t dependent on the same network. That means a backup stored in the cloud that can be restored to another location or cloud-based compute service and put into service immediately after an outage.

Hybrid Cloud Example: VTL and the Cloud

Some organizations will already have made a significant investment in software and hardware that supports LTO protocols. Specifically, they are using Veeam to back up their VMs onto physical tape. Using StarWind to act as a VTL with Veeam enables users to save time and money by connecting their on-premises Veeam Backup & Replication archives to Backblaze B2 Cloud Storage.

Why Veeam, StarWind VTL, and Backblaze B2?

What are the primary reasons that an organization would want to adopt Veeam + StarWind VTL + B2 as a hybrid cloud backup solution?

You are already invested in Veeam along with LTO software and hardware.

You require rapid and reliable recovery of service should anything disrupt your primary data center.

Having a backup in the cloud with B2 provides an economical primary or secondary cloud storage solution, enables fast restoration to a current or alternate location, and offers the option to quickly bring a cloud-based compute service online, thereby minimizing any loss of service and ensuring business continuity. Backblaze’s B2 is an ideal solution for backing up Veeam’s backup repository due to B2’s combination of low cost and high availability compared to other cloud solutions such as Microsoft Azure or Amazon AWS.

This is a customer post by Ajay Rathod, a Staff Data Engineer at Realtor.com.

Realtor.com, in their own words: Realtor.com®, operated by Move, Inc., is a trusted resource for home buyers, sellers, and dreamers. It offers the most comprehensive database of for-sale properties, among competing national sites, and the information, tools, and professional expertise to help people move confidently through every step of their home journey.

Move, Inc. processes hundreds of terabytes of data partitioned by day and hour. Various teams run hundreds of queries on this data. Using AWS services, Move, Inc. has built an infrastructure for gathering and analyzing data:

To increase the effectiveness of the storage and subsequent querying, the data is converted into Parquet format and stored again in S3.

Amazon Athena is used as the SQL (Structured Query Language) engine to query the data in S3. Athena is easy to use and is often quickly adopted by various teams.

Teams visualize query results in Amazon QuickSight. Amazon QuickSight is a business analytics service that allows you to quickly and easily visualize data and collaborate with other users in your account.

This architecture is known as the data platform and is shared by the data science, data engineering, and the data operations teams within the organization. Move, Inc. also enables other cross-functional teams to use Athena. When many users use Athena, it helps to monitor its usage to ensure cost-effectiveness. This leads to a strong need for Athena metrics that can give details about the following:

Users

Amount of data scanned (to monitor the cost of AWS service usage)

The databases used for queries

Actual queries that teams run

Currently, the Move, Inc. team does not have an easy way of obtaining all these metrics from a single tool. Having a way to do this would greatly simplify monitoring efforts. For example, the data operations team wants to collect several metrics every day obtained from queries run on Athena for their data. They require the following metrics:

Amount of data scanned by each user

Number of queries by each user

Databases accessed by each user

In this post, I discuss how to build a solution for monitoring Athena usage. To build this solution, you rely on AWS CloudTrail. CloudTrail is a web service that records AWS API calls for your AWS account and delivers log files to an S3 bucket.

Solution

Here is the high-level overview:

Use the CloudTrail API to audit the user queries, and then use Athena to create a table from the CloudTrail logs.

Query the Athena API with the AWS CLI to gather metrics about the data scanned by the user queries and put this information into another table in Athena.

Combine the information from these two sources by joining the two tables.

Use the resulting data to analyze, build insights, and create a dashboard that shows the usage of Athena by users within different teams in the organization.

Step 1: Create a table in Athena for data in CloudTrail

The CloudTrail API records all Athena queries run by different teams within the organization. These logs are saved in S3. The fields of most interest are:

User identity

Start time of the API call

Source IP address

Request parameters

Response elements returned by the service

When end users make queries in Athena, these queries are recorded by CloudTrail as responses from Athena web service calls. In these responses, each query is represented as a JSON (JavaScript Object Notation) string.

You can use the following CREATE TABLE statement to create the cloudtrail_logs table in Athena. For more information, see Querying CloudTrail Logs in the Athena documentation.
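The full statement is in that documentation; an abridged sketch, with placeholder S3 paths and only the columns the queries below rely on, looks like this:

CREATE EXTERNAL TABLE cloudtrail_logs (
  useridentity STRUCT<
    type: STRING,
    principalid: STRING,
    arn: STRING,
    accountid: STRING,
    userName: STRING,
    sessioncontext: STRUCT<
      sessionissuer: STRUCT<
        type: STRING,
        principalId: STRING,
        arn: STRING,
        accountId: STRING,
        userName: STRING>>>,
  eventtime STRING,
  eventsource STRING,
  eventname STRING,
  sourceipaddress STRING,
  requestparameters STRING,
  responseelements STRING
)
ROW FORMAT SERDE 'com.amazon.emr.hive.serde.CloudTrailSerde'
STORED AS INPUTFORMAT 'com.amazon.emr.cloudtrail.CloudTrailInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION 's3://<your-cloudtrail-bucket>/AWSLogs/<account-id>/CloudTrail/';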

Step 2: Create a table in Amazon Athena for data from API output

Athena provides an API that can be queried to obtain information about a specific query ID. It also provides an API to obtain information about a batch of query IDs, with a batch size of up to 50 query IDs.

You can use this API call to obtain information about the Athena queries that you are interested in and store this information in an S3 location. Create an Athena table to represent this data in S3. For the purpose of this post, the response fields that are of interest are as follows:
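The original field list isn’t reproduced in this copy. Plausible candidates, based on the Athena GetQueryExecution response, are QueryExecutionId, Statistics.DataScannedInBytes, Statistics.EngineExecutionTimeInMillis, and Status.CompletionDateTime; a hypothetical CSV-backed table definition for them (all column names here are assumptions, reused in the later sketches) would be:

CREATE EXTERNAL TABLE athena_api_output (
  query_id STRING,
  data_scanned_bytes BIGINT,
  execution_time_ms BIGINT,
  completion_date STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION 's3://<your-metrics-bucket>/athena-api-output/';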

You can inspect the query IDs and user information for the last day. The query is as follows:

WITH data AS (
  SELECT
    json_extract(responseelements,
      '$.queryExecutionId') AS query_id,
    useridentity.arn AS uid,
    useridentity.sessioncontext.sessionIssuer.userName AS role,
    from_iso8601_timestamp(eventtime) AS dt
  FROM cloudtrail_logs
  WHERE eventsource = 'athena.amazonaws.com'
    AND eventname = 'StartQueryExecution'
    AND json_extract(responseelements, '$.queryExecutionId') IS NOT NULL
)
SELECT *
FROM data
WHERE dt > date_add('day', -1, now())

Step 3: Obtain Query Statistics from Athena API

You can write a simple Python script to loop through queries in batches of 50 and query the Athena API for query statistics. You can use the Boto library for these lookups. Boto is a library that provides an easy way to interact with and automate AWS services. The response from the Boto API can be parsed to extract the fields that you need, as described in Step 2.

An example Python script is available in the AthenaMetrics GitHub repo.

Format these fields, for each query ID, as CSV strings and store them for the entire batch response in an S3 bucket. This S3 bucket is represented by the table created in Step 2, athena_api_output.

In your Python code, create a variable named sql_query, and assign it a string representing the SQL query defined in Step 2. The s3_query_folder is the location in S3 that is used by Athena for storing results of the query. The code is as follows:
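A minimal boto3 sketch of that step (the bucket in s3_query_folder is a placeholder) might look like this:

import boto3

athena = boto3.client("athena")

# The SQL defined in Step 2, stored as a plain string
sql_query = """
WITH data AS ( ... )  -- full query from Step 2 goes here
SELECT * FROM data WHERE dt > date_add('day', -1, now())
"""
# Athena writes the result set for this query to s3_query_folder
s3_query_folder = "s3://<your-metrics-bucket>/query-results/"

response = athena.start_query_execution(
    QueryString=sql_query,
    ResultConfiguration={"OutputLocation": s3_query_folder},
)
execution_id = response["QueryExecutionId"]
# Once the query succeeds, the rows can be read back with
# athena.get_query_results(QueryExecutionId=execution_id), paginating as needed.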

You can iterate through the results in the response object and consolidate them in batches of 50 results. For each batch, you can invoke the Athena API, batch-get-query-execution.

Store the output in the S3 location pointed to by the CREATE TABLE definition for the table athena_api_output, in Step 2. The SQL statement above returns only queries run in the last 24 hours. You may want to increase that to get usage over a longer period of time. The code snippet for this API call is as follows:
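A sketch of that call, assuming batchqueryids holds the current batch of up to 50 query IDs pulled from the SELECT results, might be:

# batchqueryids: a list of up to 50 query IDs from the Step 2 query results
response = athena.batch_get_query_execution(QueryExecutionIds=batchqueryids)

csv_rows = []
for qe in response["QueryExecutions"]:
    stats = qe.get("Statistics", {})
    status = qe.get("Status", {})
    csv_rows.append(",".join([
        qe["QueryExecutionId"],
        str(stats.get("DataScannedInBytes", 0)),
        str(stats.get("EngineExecutionTimeInMillis", 0)),
        str(status.get("CompletionDateTime", "")),
    ]))
# csv_rows is then written to the S3 location behind athena_api_output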

The batchqueryids value is an array of 50 query IDs extracted from the result set of the SELECT query. This script creates the data needed by your second table, athena_api_output, and you are now ready to join both tables in Athena.

Step 4: Join the CloudTrail and Athena API data

Now that the two tables are available with the data that you need, you can run the following Athena query to look at the usage by user. You can limit the output of this query to the most recent five days.
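The query isn’t reproduced here; a hypothetical version, reusing the assumed athena_api_output column names from Step 2, could look like this:

-- Usage by user over the most recent five days
WITH started AS (
  SELECT
    json_extract_scalar(responseelements, '$.queryExecutionId') AS query_id,
    useridentity.arn AS uid
  FROM cloudtrail_logs
  WHERE eventsource = 'athena.amazonaws.com'
    AND eventname = 'StartQueryExecution'
)
SELECT
  s.uid,
  COUNT(*) AS query_count,
  SUM(a.data_scanned_bytes) AS total_bytes_scanned
FROM started s
JOIN athena_api_output a
  ON s.query_id = a.query_id
WHERE from_iso8601_timestamp(a.completion_date) > date_add('day', -5, now())
GROUP BY s.uid
ORDER BY total_bytes_scanned DESC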

Conclusion

Using the solution described in this post, you can continuously monitor the usage of Athena by various teams. Taking this a step further, you can automate and set user limits for how much data the Athena users in your team can query within a given period of time. You may also choose to add notifications when the usage by a particular user crosses a specified threshold. This helps you manage costs incurred by different teams in your organization.

Realtor.com would like to acknowledge the tremendous support and guidance provided by Hemant Borole, Senior Consultant, Big Data & Analytics with AWS Professional Services in helping to author this post.

About the Author

Ajay Rathod is Staff Data Engineer at Realtor.com. With a deep background in AWS Cloud Platform and Data Infrastructure, Ajay leads the Data Engineering and automation aspect of Data Operations at Realtor.com. He has designed and deployed many ETL pipelines and workflows for the Realtor Data Analytics Platform using AWS services like Data Pipeline, Athena, Batch, Glue and Boto3. He has created various operational metrics to monitor ETL Pipelines and Resource Usage.

Flight status

We had a total of 212 Mission Space Lab entries from 22 countries. Of these, 114 fantastic projects have been given flight status, and the teams’ project code will run in space!

But they’re not winners yet. In April, the code will be sent to the ISS, and then the teams will receive back their experimental data. Next, to get deeper insight into the process of scientific endeavour, they will need to produce a final report analysing their findings. Winners will be chosen based on the merit of their final report, and the winning teams will get exclusive prizes. Check the list below to see if your team got flight status.

Postcards containing a specific code will be required for advertising that mentions a specific candidate running for a federal office, Katie Harbath, Facebook’s global director of policy programs, said. The requirement will not apply to issue-based political ads, she said.

“If you run an ad mentioning a candidate, we are going to mail you a postcard and you will have to use that code to prove you are in the United States,” Harbath said at a weekend conference of the National Association of Secretaries of State, where executives from Twitter Inc and Alphabet Inc’s Google also spoke.

“It won’t solve everything,” Harbath said in a brief interview with Reuters following her remarks.

But sending codes through old-fashioned mail was the most effective method the tech company could come up with to prevent Russians and other bad actors from purchasing ads while posing as someone else, Harbath said.

It does mean a delay of several days between purchasing an ad and seeing it run.

Last January it was revealed that after things had become tricky in the US, the copyright trolls behind the action movie London Has Fallen were testing out the Norwegian market.

Reports emerged of letters being sent out to local Internet users by Danish law firm Njord Law, each demanding a cash payment of 2,700 NOK (around US$345). Failure to comply, the company claimed, could result in a court case and damages of around $12,000.

The move caused outrage locally, with consumer advice groups advising people not to pay and even major anti-piracy groups distancing themselves from the action. However, in May 2017 it appeared that progress had been made in stopping the advance of the trolls when another Njord Law case running since 2015 hit the rocks.

The law firm previously sent a request to the Oslo District Court on behalf of entertainment company Scanbox asking ISP Telenor to hand over subscribers’ details. In May 2016, Scanbox won its case and Telenor was ordered to hand over the information.

On appeal, however, the tables were turned when it was decided that evidence supplied by the law firm failed to show that sharing carried out by subscribers was substantial.

Undeterred, Njord Law took the case all the way to the Supreme Court. The company lost when a panel of judges found that the evidence presented against Telenor’s customers wasn’t good enough to prove infringement beyond a certain threshold. But Njord Law still wasn’t done.

More than six months on, the ruling from the Supreme Court only seems to have provided the company with a template. If the law firm could show that the scale of sharing exceeds the threshold set by Norway’s highest court, then disclosure could be obtained. That appears to be the case now.

In a ruling handed down by the Oslo District Court in January, it’s revealed that Njord Law and its partners handed over evidence which shows 23,375 IP addresses engaged in varying amounts of infringing behavior over an extended period. The ISP they have targeted is being kept secret by the court but is believed to be Telenor.

Using information supplied by German anti-piracy outfit MaverickEye (which is involved in numerous copyright troll cases globally), Njord Law set out to show that the conduct of the alleged pirates had been exceptional for a variety of reasons, categorizing them variously (but non-exclusively) as follows:

– IP addresses involved in BitTorrent swarm sizes greater than 10,000 peers/pirates
– IP addresses that have shared at least two of the plaintiffs’ movies
– IP addresses making available the plaintiffs’ movies on at least two individual days
– IP addresses that made available at least ten movies in total
– IP addresses that made available different movies on at least ten individual days
– IP addresses that made available movies from businesses and public institutions

While rejecting some categories, the court was satisfied that 21,804 IP addresses of the 23,375 IP addresses presented by Njord Law met or exceeded the criteria for disclosure. It’s still not clear how many of these IP addresses identify unique subscribers but many thousands are expected.

“For these users, it has been established that the gravity, extent, and harm of the infringement are so great that consideration for the rights holder’s interests in accessing information identifying the [allegedly infringing] subscribers is greater than the consideration of the subscribers’,” the court writes in its ruling.

“Users’ confidence that their private use of the Internet is protected from public access is a generally important factor, but not in this case where illegal file sharing has been proven. Nor has there been any information stating that the offenders in the case are children or anything else which implies that disclosure of information about the holder of the subscriber should be problematic.”

While the ISP (Telenor) will now have to spend time and resources disclosing its subscribers’ personal details to the law firm, it will be compensated for its efforts. The Oslo District Court has ordered Njord Law to pay costs of NOK 907,414 (US$115,822) plus NOK 125 (US$16.00) for every IP address and associated details it receives.

The decision can be appealed but when contacted by Norwegian publication Nettavisen, Telenor declined to comment on the case.

There is now the question of what Njord Law will do with the identities it obtains. It seems very likely that it will ask for a sum of money to make a potential lawsuit go away, but if a subscriber refuses to pay, it will still need to take that individual to court in order to extract payment.

This raises the challenge of proving that the subscriber is the actual infringer when it could be anyone in a household. But that battle will have to wait until another day.

The full decision of the Oslo District Court can be found here (Norwegian)

Many organizations, particularly enterprises, rely on message brokers to connect and coordinate different systems. Message brokers enable distributed applications to communicate with one another, serving as the technological backbone for their IT environment, and ultimately their business services. Applications depend on messaging to work.

In many cases, those organizations have started to build new or “lift and shift” applications to AWS. In some cases, there are applications, such as mainframe systems, too costly to migrate. In these scenarios, those on-premises applications still need to interact with cloud-based components.

Amazon MQ is a managed message broker service for ActiveMQ that enables organizations to send messages between applications in the cloud and on-premises to enable hybrid environments and application modernization. For example, you can invoke AWS Lambda from queues and topics managed by Amazon MQ brokers to integrate legacy systems with serverless architectures. ActiveMQ is an open-source message broker written in Java that is packaged with clients in multiple languages, the Java Message Service (JMS) client being one example.

This post shows how you can use Amazon MQ to integrate on-premises and cloud environments using the network of brokers feature of ActiveMQ. It provides configuration parameters for a one-way (non-duplex) connection for the flow of messages from an on-premises ActiveMQ message broker to Amazon MQ.

ActiveMQ and the network of brokers

First, look at queues within ActiveMQ and then at the network of brokers as a mechanism to distribute messages.

The network of brokers behaves differently from models such as physical networks. The key consideration is that the production (sending) of a message is disconnected from the consumption of that message. Think of the delivery of a parcel: The parcel is sent by the supplier (producer) to the end customer (consumer). The path it took to get there is of little concern to the customer, as long as it receives the package.

The same logic can be applied to the network of brokers. Here’s how you build the flow from a simple message to a queue and build toward a network of brokers. Before you look at setting up a hybrid connection, I discuss how a broker processes messages in a simple scenario.

When a message is sent from a producer to a queue on a broker, the following steps occur:

A message is sent to a queue from the producer.

The broker persists this in its store or journal.

At this point, an acknowledgement (ACK) is sent to the producer from the broker.

When a consumer looks to consume the message from that same queue, the following steps occur:

The message listener (consumer) calls the broker, which creates a subscription to the queue.

Messages are fetched from the message store and sent to the consumer.

The consumer acknowledges that the message has been received before processing it.

Upon receiving the ACK, the broker sets the message as having been consumed. By default, this deletes it from the queue.

You can set the consumer to ACK after processing by setting up transaction management, or handle it manually using Session.CLIENT_ACKNOWLEDGE, as in the sketch below.
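As a brief illustration of the second option, here is a hedged JMS consumer sketch using Session.CLIENT_ACKNOWLEDGE; the broker URL and queue name are placeholders:

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class ClientAckConsumer {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616"); // placeholder broker URL
        Connection connection = factory.createConnection();
        connection.start();

        // CLIENT_ACKNOWLEDGE defers the ACK to the application, so the broker
        // only marks the message as consumed after processing succeeds
        Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        Queue queue = session.createQueue("TestingQ");
        MessageConsumer consumer = session.createConsumer(queue);

        Message message = consumer.receive(5000); // wait up to 5 seconds
        if (message != null) {
            // ... process the message here ...
            message.acknowledge(); // ACK only after successful processing
        }
        connection.close();
    }
}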

Static propagation

I now introduce the concept of static propagation with the network of brokers as the mechanism for message transfer from on-premises brokers to Amazon MQ. Static propagation refers to message propagation that occurs in the absence of subscription information. In this case, the objective is to transfer messages arriving at your selected on-premises broker to the Amazon MQ broker for consumption within the cloud environment.

After you configure static propagation with a network of brokers, the following occurs:

The on-premises broker receives a message from a producer for a specific queue.

The on-premises broker sends (statically propagates) the message to the Amazon MQ broker.

The Amazon MQ broker sends an acknowledgement to the on-premises broker, which marks the message as having been consumed.

Amazon MQ holds the message in its queue ready for consumption.

A consumer connects to Amazon MQ broker, subscribes to the queue in which the message resides, and receives the message.

For Deployment mode, enter one of the following:
– Single-instance broker for development and test implementations (recommended)
– Active/standby broker for high availability in production environments

Scroll down and enter your user name and password.

Expand Advanced Settings.

For VPC, Subnet, and Security Group, pick the values for the resources in which your broker will reside.

For Public Accessibility, choose Yes, as connectivity is internet-based. Another option would be to use private connectivity between your on-premises network and the VPC, an example being an AWS Direct Connect or VPN connection. In that case, you could set Public Accessibility to No.

For Maintenance, leave the default value, No preference.

Choose Create Broker. Wait several minutes for the broker to be created.

After creation is complete, you see your broker listed.

For connectivity to work, you must configure the security group where Amazon MQ resides. For this post, I focus on the OpenWire protocol.

Configuring the network of brokers

Configuring the network of brokers with static propagation occurs on the on-premises broker by applying changes to the following file: <activemq install directory>/conf/activemq.xml

Network connector

This is the first configuration item required to enable a network of brokers. It is only required on the on-premises broker, which initiates and creates the connection with Amazon MQ. This connection, after it’s established, enables the flow of messages in either direction between the on-premises broker and Amazon MQ. The focus of this post is the uni-directional flow of messages from the on-premises broker to Amazon MQ.

The default activemq.xml file does not include the network connector configuration. Add this with the networkConnector element. In this scenario, edit the on-premises broker activemq.xml file to include the following information between <systemUsage> and <transportConnectors>:
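A sketch of such a networkConnector element, with placeholder endpoint, credentials, and queue name, could look like this:

<networkConnectors>
  <networkConnector name="Q:OnPremBroker->AmazonMqBroker"
      duplex="false"
      networkTTL="2"
      uri="static:(ssl://<your-amazon-mq-endpoint>:61617)"
      userName="amazonMqUsername"
      password="amazonMqPassword">
    <staticallyIncludedDestinations>
      <queue physicalName="TestingQ"/>
    </staticallyIncludedDestinations>
  </networkConnector>
</networkConnectors>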

The following components are the most important elements when configuring your on-premises broker.

name – Name of the network bridge. In this case, it specifies two things:

That this connection relates to an ActiveMQ queue (Q) as opposed to a topic (T), for reference purposes.

The source broker and target broker.

duplex – Setting this to false ensures that messages traverse uni-directionally from the on-premises broker to Amazon MQ.

uri – Specifies the remote endpoint to which to connect for message transfer. In this case, it is an OpenWire endpoint on your Amazon MQ broker. This information can be obtained from the Amazon MQ console or via the API.

username and password – The same username and password configured when creating the Amazon MQ broker, and used to access the Amazon MQ ActiveMQ console.

networkTTL – Number of brokers in the network through which messages and subscriptions can pass. Leave this setting at the current value, if it is already included in your broker connection.

staticallyIncludedDestinations > queue physicalName – The destination ActiveMQ queue for which messages are destined. This is the queue that is propagated from the on-premises broker to the Amazon MQ broker for message consumption.

After the network connector is configured, you must restart the ActiveMQ service on the on-premises broker for the changes to be applied.

Verify the configuration

There are a number of places within the ActiveMQ console of your on-premises and Amazon MQ brokers to browse to verify that the configuration is correct and the connection has been established.

On-premises broker

Launch the ActiveMQ console of your on-premises broker and navigate to Network. You should see an active network bridge similar to the following:

This identifies that the connection between your on-premises broker and your Amazon MQ broker is up and running.

Now navigate to Connections and scroll to the bottom of the page. Under the Network Connectors subsection, you should see a connector labeled with the name: value that you provided within the activemq.xml configuration file. You should see an entry similar to:

Amazon MQ broker

Launch the ActiveMQ console of your Amazon MQ broker and navigate to Connections. Scroll to the Connections openwire subsection and you should see a connection specified that references the name: value that you provided within the activemq.xml configuration file. You should see an entry similar to:

If you configured the uri: for AMQP, STOMP, MQTT, or WSS as opposed to OpenWire, you would see this connection under the corresponding section of the Connections page.

Testing your message flow

The setup described outlines a way for messages produced on premises to be propagated to the cloud for consumption in the cloud. This section provides steps on verifying the message flow.

Verify that the queue has been created

After you specify this queue name (TestingQ) as staticallyIncludedDestinations > queue physicalName: and your ActiveMQ service starts, you see the following on your on-premises ActiveMQ console Queues page.

As you can see, no messages have been sent but you have one consumer listed. If you then choose Active Consumers under the Views column, you see Active Consumers for TestingQ.

This is telling you that your Amazon MQ broker is a consumer of your on-premises broker for the testing queue.

Produce and send a message to the on-premises broker

Now, produce a message on an on-premises producer and send it to your on-premises broker to a queue named TestingQ. If you navigate back to the queues page of your on-premises ActiveMQ console, you see that the messages enqueued and messages dequeued column count for your TestingQ queue have changed:

What this means is that the message originating from the on-premises producer has traversed the on-premises broker and propagated immediately to the Amazon MQ broker. At this point, the message is no longer available for consumption from the on-premises broker.

If you access the ActiveMQ console of your Amazon MQ broker and navigate to the Queues page, you see the following for the TestingQ queue:

This means that the message originally sent to your on-premises broker has traversed the network of brokers’ unidirectional network bridge, and is ready to be consumed from your Amazon MQ broker. The indicator is the Number of Pending Messages column.

Consume the message from an Amazon MQ broker

Connect to the Amazon MQ TestingQ queue from a consumer within the AWS Cloud environment for message consumption. Log on to the ActiveMQ console of your Amazon MQ broker and navigate to the Queues page:

As you can see, the Number of Pending Messages column figure has changed to 0 as that message has been consumed.

This diagram outlines the message lifecycle from the on-premises producer to the on-premises broker, traversing the hybrid connection between the on-premises broker and Amazon MQ, and finally consumption within the AWS Cloud.

Conclusion

This post focused on an ActiveMQ-specific scenario for transferring messages within an ActiveMQ queue from an on-premises broker to Amazon MQ.

For other on-premises brokers, such as IBM MQ, another approach would be to run an ActiveMQ broker on-premises and use JMS bridging to IBM MQ, while using the approach in this post to forward to Amazon MQ. Yet another approach would be to use Apache Camel for more sophisticated routing.

I hope that you have found this example of hybrid messaging between an on-premises environment and the AWS Cloud to be useful. Many customers are already using on-premises ActiveMQ brokers, and this is a great use case for enabling hybrid cloud scenarios.

To learn more, see the Amazon MQ website and Developer Guide. You can try Amazon MQ for free with the AWS Free Tier, which includes up to 750 hours of a single-instance mq.t2.micro broker and up to 1 GB of storage per month for one year.

Dún Aonghasa presents early evidence of the same principles of redundant security measures at work in 13th century castles, 17th century star-shaped artillery fortifications, and even “defense in depth” security architecture promoted today by the National Institute of Standards and Technology, the Nuclear Regulatory Commission, and countless other security organizations world-wide.

Security advances throughout the centuries have been mostly technical adjustments in response to evolving weaponry. Fortification — the art and science of protecting a place by imposing a barrier between you and an enemy — is as ancient as humanity. From the standpoint of theory, however, there is very little about modern network or airport security that could not be learned from a 17th century artillery manual. That should trouble us more than it does.

Fortification depends on walls as a demarcation between attacker and defender. The very first priority action listed in the 2017 National Security Strategy states: “We will secure our borders through the construction of a border wall, the use of multilayered defenses and advanced technology, the employment of additional personnel, and other measures.” The National Security Strategy, as well as the executive order just preceding it, is just formal language to describe the recurrent and popular idea of a grand border wall as a central tool of strategic security. There’s been a lot said about the costs of the wall. But, as the American finger hovers over the Hadrian’s Wall 2.0 button, whether or not a wall will actually improve national security depends a lot on how walls work, and more so, on how they fail.

ISP blocking has become a prime measure for the entertainment industry to target pirate sites on the Internet.

In recent years sites have been blocked throughout Europe, in Asia, and even Down Under.

Last month, a coalition of Canadian companies called on the local telecom regulator CRTC to establish a local pirate site blocking program, which would be the first of its kind in North America.

The Canadian deal is backed by both copyright holders and major players in the Telco industry, such as Bell and Rogers, which also have media companies of their own. Instead of court-ordered blockades, they call for a mutually agreed deal where ISPs will block pirate sites.

The plan has triggered a fair amount of opposition. Tens of thousands of people have protested against the proposal and several experts are warning against the negative consequences it may have.

One of the most vocal opponents is University of Ottawa law professor Michael Geist. In a series of articles, professor Geist highlighted several problems, including potential overblocking.

The Fairplay Canada coalition downplays overblocking, according to Geist. They say the measures will only affect sites that are blatantly, overwhelmingly or structurally engaged in piracy, which appears to be a high standard.

However, the same coalition uses a report from MUSO as its primary evidence. This report draws on a list of 23,000 pirate sites, which may not all be blatant enough to meet the blocking standard.

For example, professor Geist notes that it includes a site dedicated to user-generated subtitles as well as sites that offer stream ripping tools which can be used for legal purposes.

“Stream ripping is a concern for the music industry, but these technologies (which are also found in readily available software programs from a local BestBuy) also have considerable non-infringing uses, such as for downloading Creative Commons licensed videos also found on video sites,” Geist writes.

If the coalition tried to have all these sites blocked, the scope would be much larger than currently portrayed. Conversely, if only a few of the sites were blocked, then the evidence that was used to put these blocks in place would have been exaggerated.

“In other words, either the scope of block list coverage is far broader than the coalition admits or its piracy evidence is inflated by including sites that do not meet its piracy standard,” Geist notes.

Perhaps most concerning is the slippery slope that the blocking efforts can turn into. Professor Geist fears that after the standard piracy sites are dealt with, related targets may be next.

This includes VPN services. While this may sound far-fetched to some, several members of the coalition, such as Bell and Rogers, have already criticized VPNs in the past since these allow people to watch geo-blocked content.

“Once the list of piracy sites (whatever the standard) is addressed, it is very likely that the Bell coalition will turn its attention to other sites and services such as virtual private networks (VPNs).

“This is not mere speculation. Rather, it is taking Bell and its allies at their word on how they believe certain services and sites constitute theft,” Geist adds.

The issue may even be more relevant in this case, since the same VPNs can also be used to circumvent pirate site blockades.

“Further, since the response to site blocking from some Internet users will surely involve increased use of VPNs to evade the blocks, the attempt to characterize VPNs as services engaged in piracy will only increase,” Geist adds.

Potential overblocking is just one of the many issues with the current proposal, according to the law professor. Geist previously highlighted that current copyright law already provides sufficient remedies to deal with piracy and that piracy isn’t that much of a problem in Canada in the first place.

The CRTC has yet to issue its review of the proposal but now that the cat is out of the bag, rightsholders and ISPs are likely to keep pushing for blockades, one way or the other.

Anti-piracy systems and DRM come in all shapes and sizes, none of them particularly popular, but one deployed by flight sim company FlightSimLabs is likely to go down in history as one of the most outrageous.

It all started yesterday on Reddit when Flight Sim user ‘crankyrecursion’ reported a little extra something in his download of FlightSimLabs’ A320X module.

“Using file ‘FSLabs_A320X_P3D_v2.0.1.231.exe’ there seems to be a file called ‘test.exe’ included,” crankyrecursion wrote.

“This .exe file is from http://securityxploded.com and is touted as a ‘Chrome Password Dump’ tool, which seems to work – particularly as the installer would typically run with Administrative rights (UAC prompts) on Windows Vista and above. Can anyone shed light on why this tool is included in a supposedly trusted installer?”

The existence of a Chrome password dumping tool is certainly cause for alarm, especially if the software had been obtained from a less-than-official source, such as a torrent or similar site, given the potential for third-party pollution.

However, with the possibility of a nefarious third party dumping something nasty into a pirate release still lurking on the horizon, things took an unexpected turn. FlightSimLabs chief Lefteris Kalamaras made a statement basically admitting that his company was behind the malware installation.

“We were made aware there is a Reddit thread started tonight regarding our latest installer and how a tool is included in it, that indiscriminately dumps Chrome passwords. That is not correct information – in fact, the Reddit thread was posted by a person who is not our customer and has somehow obtained our installer without purchasing,” Kalamaras wrote.

“[T]here are no tools used to reveal any sensitive information of any customer who has legitimately purchased our products. We all realize that you put a lot of trust in our products and this would be contrary to what we believe.

“There is a specific method used against specific serial numbers that have been identified as pirate copies and have been making the rounds on ThePirateBay, RuTracker and other such malicious sites,” he added.

In a nutshell, FlightSimLabs installed a password dumper onto ALL users’ machines, whether they were pirates or not, but then only activated the password-stealing module when it determined that specific ‘pirate’ serial numbers had been used which matched those on FlightSimLabs’ servers.

“Test.exe is part of the DRM and is only targeted against specific pirate copies of copyrighted software obtained illegally. That program is only extracted temporarily and is never under any circumstances used in legitimate copies of the product,” Kalamaras added.

That didn’t impress Luke Gorman, who published an analysis slamming the flight sim company for knowingly installing password-stealing malware on users’ machines, including those of people who purchased the title legitimately.

Password stealer in action (credit: Luke Gorman)

Making matters even worse, the FlightSimLabs chief went on to say that information being obtained from pirates’ machines in this manner is likely to be used in court or other legal processes.

“This method has already successfully provided information that we’re going to use in our ongoing legal battles against such criminals,” Kalamaras revealed.

While it remains to be seen whether the extracted usernames and passwords will be used elsewhere, it appears that FlightSimLabs has had a change of heart. With immediate effect, the company is pointing customers to a new installer that doesn’t include code for stealing their most sensitive data.

“I want to reiterate and reaffirm that we as a company and as flight simmers would never do anything to knowingly violate the trust that you have placed in us by not only buying our products but supporting them and FlightSimLabs,” Kalamaras said in an update.

“While the majority of our customers understand that the fight against piracy is a difficult and ongoing battle that sometimes requires drastic measures, we realize that a few of you were uncomfortable with this particular method which might be considered to be a bit heavy handed on our part. It is for this reason we have uploaded an updated installer that does not include the DRM check file in question.”

Amazon Cognito lets you easily add user sign-up, sign-in, and access control to your mobile and web apps. You can use fully managed user directories, called Amazon Cognito user pools, to create accounts for your users, allow them to sign in, and update their profiles. Your users can also sign in through external identity providers (IdPs) by federating with Amazon, Google, Facebook, SAML, or OpenID Connect (OIDC)–based IdPs. If your app is backed by AWS resources, Amazon Cognito also gives you tools to manage permissions for accessing those resources through AWS Identity and Access Management (IAM) roles and policies, and through integration with Amazon API Gateway.
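
To make that concrete, here is a minimal sketch of user pool sign-up using the Amazon Cognito Identity SDK for JavaScript (amazon-cognito-identity-js). The pool ID, client ID, and user details are placeholders, not values from this post:

    // Assumes the amazon-cognito-identity-js bundle is loaded and exposes the
    // AmazonCognitoIdentity global, as described in that SDK's README.
    var userPool = new AmazonCognitoIdentity.CognitoUserPool({
      UserPoolId: 'us-east-1_EXAMPLE',    // placeholder user pool ID
      ClientId: 'example-app-client-id'   // placeholder app client ID
    });

    var emailAttribute = new AmazonCognitoIdentity.CognitoUserAttribute({
      Name: 'email',
      Value: 'user@example.com'           // placeholder
    });

    userPool.signUp('exampleuser', 'Str0ng-example-passw0rd', [emailAttribute], null,
      function (err, result) {
        if (err) {
          console.error('Sign-up failed: ' + err.message);
          return;
        }
        console.log('User name is ' + result.user.getUsername());
      });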

In this post, I explain some new advanced security features (in beta) that were launched at AWS re:Invent 2017 for Amazon Cognito user pools and how to use them. Note that separate prices apply to these advanced security features, as described on our pricing page.

The new advanced security features of Amazon Cognito

Security is the top priority for Amazon Cognito. We handle user authentication and authorization to control access to your web and mobile apps, so security is vital. The new advanced security features add additional protections for your users that you manage in Amazon Cognito user pools. In particular, we have added protection against compromised credentials and risk-based adaptive authentication.

Compromised credentials protection

Our compromised credentials feature protects your users’ accounts by preventing your users from reusing credentials (a user name and password pair) that have been exposed elsewhere. This new feature addresses the issue of users reusing the same credentials for multiple websites and apps. For example, a user might use the same email address and password to sign in to multiple websites.

A security best practice is to never use the same user name and password in different systems. If an attacker is able to obtain user credentials through a breach of one system, they could use those user credentials to access other systems. AWS has been able to form partnerships and programs so that Amazon Cognito is informed when a set of credentials has been compromised elsewhere. When you use compromised credentials protection in Amazon Cognito, you can prevent users of your application from signing up, signing in, and changing their password with credentials that are recognized as having been compromised. If a user attempts to use credentials that we detect have been compromised, that user is required to choose a different password.

Risk-based adaptive authentication

The other major advanced security feature we launched at AWS re:Invent 2017 is risk-based adaptive authentication. Adaptive authentication protects your users from attempts to compromise their accounts—and it does so intelligently to minimize any inconvenience for your customers. With adaptive authentication, Amazon Cognito examines each user pool sign-in attempt and generates a risk score for how likely the sign-in request is to be from a malicious attacker.

Amazon Cognito examines a number of factors, including whether the user has used the same device before, or has signed in from the same location or IP address. A detected risk is rated as low, medium, or high, and you can determine what actions should be taken at each risk level. You can choose to block the request if the risk level is high, or you can choose to require a second factor of authentication, in addition to the password, for the user to sign in using multi-factor authentication (MFA). With adaptive authentication, users continue to sign in with just their password when the request has characteristics of successful sign-ins in the past. Users are prompted for a second factor only when some risk is detected with a sign-in request.

How to configure the advanced security features

Now that I’ve described the new advanced security features, I will show how to configure them for your mobile or web app. You have to create an Amazon Cognito user pool in the console and save it before you can see the advanced security settings.

First you must create and configure an Amazon Cognito user pool:

Go to the Amazon Cognito console, and choose Manage your User Pools to get started. If you already have a user pool that you can work with, choose that user pool. Otherwise, choose Create a user pool to create a new one.

On the MFA and verifications tab (see the following screenshot), enable MFA as Optional so that your individual users can choose to configure second factors of authentication, which are needed for adaptive authentication. (If you were to choose Required as the MFA setting for your user pool instead, all sign-ins would require a second factor of authentication. This would effectively disable adaptive authentication because a second factor of authentication would always be required.)

You should also enable at least one second factor of authentication. As shown in the following screenshot, I have enabled both SMS text messages and Time-based One-time Passwords (TOTP).

On the App clients tab, create an app client by choosing Add an app client, entering a name, and choosing Create app client.

Second, configure the advanced security features:

After you’ve configured and saved your user pool, you will see the Advanced security tab, as shown in the following screenshot. You can choose one of three modes for enabling the advanced security features: Yes, Audit only, and No:

If you choose No, the advanced features are all turned off.

If you choose Audit only, Amazon Cognito logs all related events to CloudWatch metrics so that you can see what risks are detected, but Amazon Cognito doesn’t take any explicit actions to protect your users. Use the Audit only mode to understand what events are happening before you fully turn on the advanced security features.

If you choose Yes, you turn on the advanced security features. We recommend that you initially run the advanced security features in Audit only mode for two weeks before choosing Yes.
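
If you prefer to script this setting, the console’s three modes appear to map to the AdvancedSecurityMode values OFF, AUDIT, and ENFORCED in the UpdateUserPool API. A minimal sketch using the AWS SDK for JavaScript, assuming that mapping and a placeholder user pool ID:

    var AWS = require('aws-sdk');
    var cognitoIdp = new AWS.CognitoIdentityServiceProvider({ region: 'us-east-1' });

    cognitoIdp.updateUserPool({
      UserPoolId: 'us-east-1_EXAMPLE',                   // placeholder
      UserPoolAddOns: { AdvancedSecurityMode: 'AUDIT' }  // 'OFF' | 'AUDIT' | 'ENFORCED'
    }, function (err) {
      if (err) console.error(err);
      else console.log('Advanced security features set to audit mode');
    });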

When you choose Yes to turn on the advanced security features, configuration options appear, as shown in the following screenshot:

First, choose if you want to configure default settings for all of your app clients, or if you want to configure settings for a specific app client. As shown in the following screenshot, you can see that I’ve chosen global default settings for all my app clients.

Next, choose the action you want to take when compromised credentials are detected. You can either Allow compromised credentials, or you can Block use of them. If you want to protect your users, you should choose Block use. However, you first can watch the metrics in CloudWatch without taking action by choosing Allow. You also can choose Customize when compromised credentials are blocked, which allows you to choose for which operations—sign up, sign in, and forgotten password—Amazon Cognito will detect and block use of compromised credentials.
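
Programmatically, these compromised credentials choices appear to correspond to the SetRiskConfiguration API. A sketch with the AWS SDK for JavaScript, using a placeholder user pool ID:

    var AWS = require('aws-sdk');
    var cognitoIdp = new AWS.CognitoIdentityServiceProvider({ region: 'us-east-1' });

    cognitoIdp.setRiskConfiguration({
      UserPoolId: 'us-east-1_EXAMPLE',  // placeholder
      CompromisedCredentialsRiskConfiguration: {
        // The operations to check, mirroring the Customize option above.
        EventFilter: ['SIGN_IN', 'SIGN_UP', 'PASSWORD_CHANGE'],
        Actions: { EventAction: 'BLOCK' }  // or 'NO_ACTION' to only record metrics
      }
    }, function (err) {
      if (err) console.error(err);
    });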

The next section on the Advanced security tab includes the configuration for adaptive authentication. For each risk level (Low, Medium, and High), you can require a second factor for MFA or you can block the request, and you can notify users about the events through email. You have two MFA choices for each risk level, along with blocking and notification options:

Optional MFA – Requires a second factor at that risk level for all users who have configured either SMS or TOTP as a second factor of authentication. Users who haven’t configured a second factor are allowed to sign in without a second factor. For optional MFA, you should encourage your users to configure a second factor of authentication for added security, but users who haven’t configured a second factor aren’t blocked from signing in.

Require MFA – Requires a second factor of authentication from all users when a risk is detected, so any users who haven’t configured a second factor are blocked from signing in at any risk level that requires MFA.

Block – Blocks the sign-in attempt.

Notify users – Sends an email to the users to notify them about the sign-in attempt. You can customize the emails as described below.
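
Through the same SetRiskConfiguration API, these per-risk-level actions appear to live under AccountTakeoverRiskConfiguration, where MFA_IF_CONFIGURED corresponds to Optional MFA, MFA_REQUIRED to Require MFA, and BLOCK to Block. A sketch with a placeholder user pool ID:

    var AWS = require('aws-sdk');
    var cognitoIdp = new AWS.CognitoIdentityServiceProvider({ region: 'us-east-1' });

    cognitoIdp.setRiskConfiguration({
      UserPoolId: 'us-east-1_EXAMPLE',  // placeholder
      AccountTakeoverRiskConfiguration: {
        Actions: {
          LowAction:    { EventAction: 'MFA_IF_CONFIGURED', Notify: true },
          MediumAction: { EventAction: 'MFA_IF_CONFIGURED', Notify: true },
          HighAction:   { EventAction: 'BLOCK',             Notify: true }
        }
        // Setting Notify: true also calls for a NotifyConfiguration; see the
        // email customization sketch below.
      }
    }, function (err) {
      if (err) console.error(err);
    });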

In the next section on the Advanced security tab, you can customize the email notifications that Amazon Cognito sends to your users if you have selected Notify users. Amazon Cognito sends these notification emails through Amazon Simple Email Service (Amazon SES). If you haven’t already, you should go to the Amazon SES console to configure and verify an email address or domain so that you can use it as the FROM email address for the notification emails that Amazon Cognito sends.

You can customize the email subject and body for the email notifications with both HTML and plain text versions, as shown in the following screenshot.
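
In the same API, the notification emails appear to be customized through NotifyConfiguration. A sketch, where the SES identity ARN, FROM address, and template text are all placeholders:

    var AWS = require('aws-sdk');
    var cognitoIdp = new AWS.CognitoIdentityServiceProvider({ region: 'us-east-1' });

    cognitoIdp.setRiskConfiguration({
      UserPoolId: 'us-east-1_EXAMPLE',  // placeholder
      AccountTakeoverRiskConfiguration: {
        Actions: {
          HighAction: { EventAction: 'MFA_IF_CONFIGURED', Notify: true }
        },
        NotifyConfiguration: {
          // An SES-verified identity, as described above (placeholder ARN).
          SourceArn: 'arn:aws:ses:us-east-1:123456789012:identity/example.com',
          From: 'no-reply@example.com',  // placeholder FROM address
          MfaEmail: {
            Subject: 'Unusual sign-in attempt on your account',
            HtmlBody: '<p>We noticed an unusual sign-in attempt and asked for an extra verification code.</p>',
            TextBody: 'We noticed an unusual sign-in attempt and asked for an extra verification code.'
          }
        }
      }
    }, function (err) {
      if (err) console.error(err);
    });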

Optionally, you can enter IP addresses that you either want to Always allow by bypassing the compromised credentials and adaptive authentication features, or to Always block. For example, if you have a site where you do testing and development, you might want to include the IP address range from that site in the Always allow list so that it doesn’t get mistaken as a risky sign-in attempt.
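
These allow and block lists appear to correspond to RiskExceptionConfiguration in the same API. A sketch with placeholder CIDR ranges:

    var AWS = require('aws-sdk');
    var cognitoIdp = new AWS.CognitoIdentityServiceProvider({ region: 'us-east-1' });

    cognitoIdp.setRiskConfiguration({
      UserPoolId: 'us-east-1_EXAMPLE',            // placeholder
      RiskExceptionConfiguration: {
        SkippedIPRangeList: ['203.0.113.0/24'],   // always allow, e.g. a test/dev site
        BlockedIPRangeList: ['198.51.100.7/32']   // always block
      }
    }, function (err) {
      if (err) console.error(err);
    });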

That’s all it takes to configure the advanced security features in the Amazon Cognito console.

Enabling the advanced security features from your app

After you have configured the advanced security features for your user pool, you need to enable them in your mobile or web app. First, you need to include a version of our SDK that is recent enough to support the features; second, in some cases, you need to set some values for iOS, Android, and JavaScript.

iOS: If you’re building your own user interface to sign in users and integrating the Amazon Cognito Identity Provider SDK, use at least version 2.6.7 of the SDK. If you’re using the Amazon Cognito Auth SDK to incorporate the customizable, hosted user interface to sign in users, also use at least version 2.6.7. If you’re configuring the Auth SDK by using Info.plist, add the PoolIdForEnablingASF key to your Amazon Cognito user pool configuration, and set it to your user pool ID. If you’re configuring the Auth SDK by using AWSCognitoAuthConfiguration, use this initializer and specify your user pool ID as userPoolIdForEnablingASF. For more details, see the CognitoAuth sample app.

JavaScript: If you’re using the Amazon Cognito Auth JS SDK to incorporate the customizable, hosted UI to sign in users, use at least version 1.1.0 of the SDK. To configure the advanced security features, add the AdvancedSecurityDataCollectionFlag parameter and set it as true. Also add the UserPoolId parameter and set it to your user pool ID. In your application, you need to include "https://amazon-cognito-assets.<region>.amazoncognito.com/amazon-cognito-advanced-security-data.min.js" to collect data about requests. For more details, see the README.md of the Auth JavaScript SDK and the SAMPLEREADME.md of the web app sample. If you’re using the Amazon Cognito Identity SDK to build your own UI, use at least version 1.28.0 of the SDK.
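
Putting those JavaScript pieces together, a configuration sketch for the Auth JS SDK might look like the following. The client ID, domain, redirect URIs, and pool ID are placeholders; the AdvancedSecurityDataCollectionFlag and UserPoolId parameters are the two additions described above:

    // In the page, also include the data collection script mentioned above:
    // <script src="https://amazon-cognito-assets.<region>.amazoncognito.com/amazon-cognito-advanced-security-data.min.js"></script>
    var authData = {
      ClientId: 'example-app-client-id',                        // placeholder
      AppWebDomain: 'example.auth.us-east-1.amazoncognito.com', // placeholder
      TokenScopesArray: ['openid'],
      RedirectUriSignIn: 'https://www.example.com/signin',      // placeholder
      RedirectUriSignOut: 'https://www.example.com/signout',    // placeholder
      UserPoolId: 'us-east-1_EXAMPLE',                          // placeholder
      AdvancedSecurityDataCollectionFlag: true
    };
    var auth = new AmazonCognitoIdentity.CognitoAuth(authData);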

Some examples of the advanced security features in action

Now that I have configured these advanced security features, let’s look at them in action. I’m using the customizable, hosted sign-up and sign-in screens that are built into Amazon Cognito user pools. I’ve done some minimal customization, and my sign-up page is shown in the following screenshot.

With the compromised credentials feature, if a user tries to sign up with credentials that have been exposed at another site, the user is told they cannot use that password for security reasons.

If a user signs in and Amazon Cognito detects a risk, and you have configured adaptive authentication, the user is asked for a second factor of authentication. The following screenshot shows an example of an SMS text message used for MFA. After the user enters a valid code from their phone, they’re successfully signed in.
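
If you build your own UI with the Amazon Cognito Identity SDK instead of the hosted screens, this flow surfaces as the mfaRequired callback of authenticateUser, and sendMFACode completes the sign-in. A sketch with placeholder names and IDs:

    // Assumes the amazon-cognito-identity-js bundle is loaded in the browser
    // and exposes the AmazonCognitoIdentity global.
    var userPool = new AmazonCognitoIdentity.CognitoUserPool({
      UserPoolId: 'us-east-1_EXAMPLE',    // placeholder
      ClientId: 'example-app-client-id'   // placeholder
    });
    var cognitoUser = new AmazonCognitoIdentity.CognitoUser({
      Username: 'exampleuser',            // placeholder
      Pool: userPool
    });
    var authDetails = new AmazonCognitoIdentity.AuthenticationDetails({
      Username: 'exampleuser',
      Password: 'Str0ng-example-passw0rd' // placeholder
    });

    cognitoUser.authenticateUser(authDetails, {
      onSuccess: function (session) {
        console.log('Signed in: ' + session.getIdToken().getJwtToken());
      },
      onFailure: function (err) {
        console.error('Sign-in failed: ' + err.message);
      },
      mfaRequired: function (challengeName, challengeParameters) {
        // Adaptive authentication detected risk, so ask for the SMS code.
        var code = window.prompt('Enter the verification code sent to your phone:');
        cognitoUser.sendMFACode(code, this);
      }
    });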

As I mentioned earlier in this post, Amazon Cognito also can notify your users whenever there’s a sign-in attempt that’s determined to have some risk. The following screenshot shows a basic example of a notification message, and you can customize these messages, as described previously.

The advanced security features also provide aggregate metrics and event histories for individual users. You can view the aggregate metrics in the CloudWatch console. Navigate to the Metrics section under Cognito. When you’re graphing, choose the Graphed metrics tab and choose Sum as the Statistic.

You can view the event histories for users in the Amazon Cognito console on the Users and groups tab. When you choose an individual user, you see that user’s event history listed under their profile information. As the following screenshot shows, you can see information about users’ events, including the date and time, the event type, the risk detected, and location. The event history includes the Risk level that indicates the Low, Medium, or High ratings described earlier and the Risk decision that indicates if a risk was detected and what type.

When you choose an entry, you see the event details and the option to Mark event as valid if it was from the user, or Mark event as invalid if it wasn’t.
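
The same history and feedback appear to be available programmatically through the AdminListUserAuthEvents and AdminUpdateAuthEventFeedback APIs. A sketch with the AWS SDK for JavaScript and placeholder IDs:

    var AWS = require('aws-sdk');
    var cognitoIdp = new AWS.CognitoIdentityServiceProvider({ region: 'us-east-1' });

    cognitoIdp.adminListUserAuthEvents({
      UserPoolId: 'us-east-1_EXAMPLE',  // placeholder
      Username: 'exampleuser',          // placeholder
      MaxResults: 10
    }, function (err, data) {
      if (err) return console.error(err);
      data.AuthEvents.forEach(function (event) {
        // Each event carries the detected risk level and decision described above.
        console.log(event.EventType, event.EventRisk && event.EventRisk.RiskLevel);
        // Feedback corresponds to the Mark event as valid/invalid buttons:
        // cognitoIdp.adminUpdateAuthEventFeedback({ UserPoolId: '...', Username: '...',
        //   EventId: event.EventId, FeedbackValue: 'Valid' }, callback);
      });
    });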

Summary

You can use these advanced security features of Amazon Cognito user pools to protect your users from compromised credentials and attempts to compromise their user pool–based accounts in your app. You also can customize the actions taken in response to different risks, or you can use audit mode to gather metrics on detected risks without taking action. For more information about using these features, see the Amazon Cognito Developer Guide.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about how to configure or use these features, start a new thread on the Amazon Cognito forum or contact AWS Support.


This column is from The MagPi issue 59. You can download a PDF of the full issue for free, or subscribe to receive the print edition through your letterbox or the digital edition on your tablet. All proceeds from the print and digital editions help the Raspberry Pi Foundation achieve our charitable goals.

“Hey, world!” Estefannie exclaims, a wide grin across her face as the camera begins to roll for another YouTube tutorial video. With a growing number of followers and wonderful support from her fans, Estefannie is building a solid reputation as an online maker, creating unique, fun content accessible to all.

It’s as if she was born into performing and making for an audience, but this fun, enjoyable journey to social media stardom came not from a desire to be in front of the camera, but rather as a unique approach to her own learning. While studying, Estefannie decided the best way to confirm her knowledge of a subject was to create an educational video explaining it. If she could teach a topic successfully, she knew she’d retained the information. And so her YouTube channel, Estefannie Explains It All, came into being.

Her first videos featured pages of notes with voice-over explanations of data structures and algorithm analysis. Then she moved in front of the camera, and expanded her skills in the process.

But YouTube isn’t her only outlet. With nearly 50,000 followers, Estefannie’s Instagram game is strong, adding to an increasing number of female coders taking to the platform. Across her Instagram grid, you’ll find insights into her daily routine, from programming on location for work to behind-the-scenes troubleshooting as she begins to create another tutorial video. It’s hard work, with content creation for both Instagram and YouTube forever on her mind as she continues to work and progress successfully as a software engineer.

As a thank you to her Instagram fans for helping her reach 10,000 followers, Estefannie created a free game for Android and iOS called Gravitris — imagine Tetris with balance issues!

Estefannie was born and raised in Mexico, with ambitions to become a graphic designer and animator. However, a documentary on coding at Pixar, and the beauty of Merida’s hair in Brave, opened her mind to the opportunities of software engineering in animation. She altered her career path, moved to the United States, and switched to a Computer Science course.

With a constant desire to make and to learn, Estefannie combines her software engineering profession with her hobby to create fun, exciting content for YouTube.

While studying, Estefannie started a Computer Science Girls Club at the University of Houston, Texas, and she found herself eager to put more time and effort into the movement to increase the percentage of women in the industry. The club was a success, and still is to this day. While Estefannie has handed over the reins, she’s still very involved in the cause.

Through her YouTube videos, Estefannie continues the theme of inclusion, with every project offering a warm sense of approachability for all, regardless of age, gender, or skill. From exploring Scratch and Makey Makey with her young niece and nephew to creating her own Disney ‘Made with Magic’ backpack for a trip to Disney World, Florida, Estefannie’s videos are essentially a documentary of her own learning process, produced so viewers can learn with her — and learn from her mistakes — to create their own tech wonders.

Estefannie’s automated gingerbread house project was a labour of love, with electronics, wires, and candy strewn across both her living room and kitchen for weeks before completion. While she already was a skilled programmer, the world of physical digital making was still fairly new for Estefannie. Having ditched her hot glue gun in favour of a soldering iron in a previous video, she continued to experiment and try out new, interesting techniques that are now second nature to many members of the maker community. With the gingerbread house, Estefannie was able to research and apply techniques such as light controls, servos, and app making, although the latter was already firmly within her skill set. The result? A fun video of ups and downs that resulted in a wonderful, festive treat. She even gave her holiday home its own solar panel!

And that’s just the beginning of her adventures with Pi…but we won’t spoil her future plans by telling you what’s coming next. Sorry! However, since this article was written last year, Estefannie has released a few more Pi-based project videos, plus some awesome interviews and live-streams with other members of the maker community such as Simone Giertz. She even made us an awesome video for our Raspberry Pi YouTube channel! So be sure to check out her latest releases.


While many wonderful maker videos show off a project without much explanation, or expect a certain level of skill from viewers hoping to recreate the project, Estefannie’s videos exist almost within their own category. We can’t wait to see where Estefannie Explains It All goes next!

Wherever Google has a presence, rightsholders are around to accuse the search giant of not doing enough to deal with piracy.

Over the past several years, the company has been attacked by both the music and movie industries but despite overtures from Google, criticism still floods in.

In Australia, things are definitely heating up. Village Roadshow, one of the nation’s foremost movie companies, has been an extremely vocal Google critic since 2015 but now its co-chief, the outspoken Graham Burke, seems to want to take things to the next level.

As part of yet another broadside against Google, Burke has for the second time in a month accused Google of playing a large part in online digital crime.

“My view is they are complicit and they are facilitating crime,” Burke said, adding that if Google wants to sue him over his comments, they’re very welcome to do so.

It’s highly unlikely that Google will take the bait. Burke’s attempt at pushing the issue further into the spotlight will have been spotted a mile off but in any event, legal battles with Google aren’t really something that Burke wants to get involved in.

Australia is currently in the midst of a consultation process for the Copyright Amendment (Service Providers) Bill 2017 which would extend the country’s safe harbor provisions to a broader range of service providers including educational institutions, libraries, archives, key cultural institutions and organizations assisting people with disabilities.

For its part, Village Roadshow is extremely concerned that these provisions may be extended to other providers – specifically Google – who might then use expanded safe harbor to deflect more liability in respect of piracy.

“Village Roadshow… urges that there be no further amendments to safe harbor and in particular there is no advantage to Australia in extending safe harbor to Google,” Burke wrote in his company’s recent submission to the government.

“It is very unlikely given their size and power that as content owners we would ever sue them but if we don’t have that right then we stand naked. Most importantly if Google do the right thing by Australia on the question of piracy then there will be no issues. However, they are very far from this position and demonstrably are facilitating crime.”

Accusations of crime facilitation are nothing new for Google, with rightsholders in the US and Europe having accused the company of the same a number of times over the years. In response, Google always insists that it abides by relevant laws and actually goes much further in tackling piracy than legislation currently requires.

On the safe harbor front, Google begins by saying that not expanding provisions to service providers will have a seriously detrimental effect on business development in the region.

“[Excluding] online service providers falls far short of a balanced, pro-innovation environment for Australia. Further, it takes Australia out of step with other digital economies by creating regulatory uncertainty for [venture capital] investment and startup/entrepreneurial success,” Google’s submission reads.

“Under the new scheme, Australian-based startups and service providers, unlike their international counterparts, will not receive clear and consistent legal protection when they respond to complaints from rightsholders about alleged instances of online infringement by third-party users on their services,” Google notes.

Interestingly, Google then delivers what appears to be a loosely veiled threat.

One of the key anti-piracy strategies touted by the mainstream entertainment companies is collaboration between rightsholders and service providers, including the latter providing voluntary tools to police infringement online. Google says that if service providers are given a raw deal on safe harbor, the extent of future cooperation may be at risk.

“If Australian-based service providers are carved out of the new safe harbor regime post-reform, they will operate from a lower incentive to build and test new voluntary tools to combat online piracy, potentially reducing their contributions to innovation in best practices in both Australia and international markets,” the company warns.

But while Village Roadshow argues against safe harbors and warns that piracy could kill the movie industry, it is quietly optimistic that the tide is turning.

In a presentation to investors last week, the company said that reducing piracy would have “only an upside” for its business but also added that new research indicates that “piracy growth [is] getting arrested.” As a result, the company says that it will build on the notion that “74% of people see piracy as ‘wrong/theft’” and will call on Australians to do the right thing.

In the meantime, the pressure on Google will continue but lawsuits – in either direction – won’t provide an answer.

Unfortunately, Bitcoin seems to have devolved into mostly a get-rich-quick scheme for nerds, and by nearly any measure it’s turning into a spectacular catastrophe. Its “success” is measured in how much a bitcoin is worth in US dollars, which is pretty close to an admission from its own investors that its only value is in converting back to “real” money — all while that same “success” is making it less useful as a distinct currency.

Blah, blah, everyone already knows this.

What concerns me slightly more is the gold rush hype cycle, which is putting cryptocurrency and “blockchain” in the news and lending it all legitimacy. People have raked in millions of dollars on ICOs of novel coins I’ve never heard mentioned again. (Note: again, that value is measured in dollars.) Most likely, none of the investors will see any return whatsoever on that money. They can’t, really, unless a coin actually takes off as a currency, and that seems at odds with speculative investing since everyone either wants to hoard or ditch their coins. When the coins have no value themselves, the money can only come from other investors, and eventually the hype winds down and you run out of other investors.

I fear this will hurt a lot of people before it’s over, so I’d like for it to be over as soon as possible.

That said, the hype itself has gotten way out of hand too. First it was the obsession with “blockchain” like it’s a revolutionary technology, but hey, Git is a fucking blockchain. The novel part is the way it handles distributed consensus (which in Git is basically left for you to figure out), and that’s uniquely important to currency because you want to be pretty sure that money doesn’t get duplicated or lost when moved around.
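
To make the comparison concrete, here’s a toy sketch (in JavaScript, my own illustration) of the structural part Git and Bitcoin share: records that embed the hash of their parent, so tampering with history changes every later hash. The distributed consensus part — the actually novel bit — is deliberately absent:

    var crypto = require('crypto');

    function makeBlock(parentHash, data) {
      // Each block's hash covers its parent's hash, chaining history together.
      var block = { parent: parentHash, data: data };
      block.hash = crypto.createHash('sha256')
        .update(block.parent + JSON.stringify(block.data))
        .digest('hex');
      return block;
    }

    var genesis = makeBlock('0'.repeat(64), { note: 'genesis' });
    var next = makeBlock(genesis.hash, { note: 'a commit, basically' });
    console.log(genesis.hash + ' -> ' + next.hash);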

But now we have startups trying to use blockchains for website backends and file storage and who knows what else? Why? What advantage does this have? When you say “blockchain”, I hear “single Git repository” — so when you say “email on the blockchain”, I have an aneurysm.

Bitcoin seems to have sparked imagination in large part because it’s decentralized, but I’d argue it’s actually a pretty bad example of a decentralized network, since people keep forking it. The ability to fork is a feature, sure, but the trouble here is that the Bitcoin family has no notion of federation — there is one canonical Bitcoin ledger and it has no notion of communication with any other. That’s what you want for currency, not necessarily other applications. (Bitcoin also incentivizes frivolous forking by giving the creator an initial pile of coins to keep and sell.)

And federation is much more interesting than decentralization! Federation gives us email and the web. Federation means I can set up my own instance with my own rules and still be able to meaningfully communicate with the rest of the network. Federation has some amount of tolerance for changes to the protocol, so such changes are more flexible and rely more heavily on consensus.

Federation is fantastic, and it feels like a massive tragedy that this rekindled interest in decentralization is mostly focused on peer-to-peer networks, which do little to address our current problems with centralized platforms.

Again, the tech is cool and all, but the marketing hype is getting way out of hand.

Maybe what I really want from 2018 is less marketing?

For one, I’ve seen a huge uptick in uncritically referring to any software that creates or classifies creative work as “AI”. Can we… can we not. It’s not AI. Yes, yes, nerds, I don’t care about the hair-splitting about the nature of intelligence — you know that when we hear “AI” we think of a human-like self-aware intelligence. But we’re applying it to stuff like a weird dog generator. Or to whatever neural network a website threw into production this week.

And this is dangerously misleading — we already had massive tech companies scapegoating The Algorithm™ for the poor behavior of their software, and now we’re talking about those algorithms as though they were self-aware, untouchable, untameable, unknowable entities of pure chaos whose decisions we are arbitrarily bound to. Ancient, powerful gods who exist just outside human comprehension or law.

It’s weird to see this stuff appear in consumer products so quickly, too. It feels quick, anyway. The latest iPhone can unlock via facial recognition, right? I’m sure a lot of effort was put into ensuring that the same person’s face would always be recognized… but how confident are we that other faces won’t be recognized? I admit I don’t follow all this super closely, so I may be imagining a non-problem, but I do know that humans are remarkably bad at checking for negative cases.

Hell, take the recurring problem of major platforms like Twitter and YouTube classifying anything mentioning “bisexual” as pornographic — because the word is also used as a porn genre, and someone threw a list of porn terms into a filter without thinking too hard about it. That’s just a word list, a fairly simple thing that any human can review; but suddenly we’re confident in opaque networks of inferred details?

I don’t know. “Traditional” classification and generation are much more comforting, since they’re a set of fairly abstract rules that can be examined and followed. Machine learning, as I understand it, is less about rules and much more about pattern-matching; it’s built out of the fingerprints of the stuff it’s trained on. Surely that’s just begging for tons of edge cases. They’re practically made of edge cases.

I’m reminded of a point I saw made a few days ago on Twitter, something I’d never thought about but should have. TurnItIn is a service for universities that checks whether students’ papers match any others, in order to detect cheating. But this is a paid service, one that fundamentally hinges on its corpus: a large collection of existing student papers. So students pay money to attend school, where they’re required to let their work be given to a third-party company, which then profits off of it? What kind of a goofy business model is this?

And my thoughts turn to machine learning, which is fundamentally different from an algorithm you can simply copy from a paper, because it’s all about the training data. And to get good results, you need a lot of training data. Where is that all coming from? How many for-profit companies are setting a neural network loose on the web — on millions of people’s work — and then turning around and selling the result as a product?

This is really a question of how intellectual property works in the internet era, and it continues our proud decades-long tradition of just kinda doing whatever we want without thinking about it too much. Nothing if not consistent.

A bit tougher, since computers are pretty alright now and everything continues to chug along. Maybe we should just quit while we’re ahead. There’s some real pie-in-the-sky stuff that would be nice, but it certainly won’t happen within a year, and may never happen except in some horrific Algorithmic™ form designed by people that don’t know anything about the problem space and only works 60% of the time but is treated as though it were bulletproof.

The giants are getting more giant. Maybe too giant? Granted, it could be much worse than Google and Amazon — it could be Apple!

Amazon has its own delivery service and brick-and-mortar stores now, as well as providing the plumbing for vast amounts of the web. They’re not doing anything particularly outrageous, but they kind of loom.

Ad company Google just put ad blocking in its majority-share browser — albeit for the ambiguously-noble goal of only blocking obnoxious ads so that people will be less inclined to install a blanket ad blocker.

Twitter is kind of a nightmare but no one wants to leave. I keep trying to use Mastodon as well, but I always forget about it after a day, whoops.

Facebook sounds like a total nightmare but no one wants to leave that either, because normies don’t use anything else, which is itself direly concerning.

IRC is rapidly bleeding mindshare to Slack and Discord, both of which are far better at the things IRC sadly never tried to do and absolutely terrible at the exact things IRC excels at.

The problem is the same as ever: there’s no incentive to interoperate. There’s no fundamental technical reason why Twitter and Tumblr and MySpace and Facebook can’t intermingle their posts; they just don’t, because why would they bother? It’s extra work that makes it easier for people to not use your ecosystem.

I don’t know what can be done about that, except to hope for a really big player to decide to play nice out of the kindness of their heart. The really big federated success stories — say, the web — mostly won out because they came along first. At this point, how does a federated social network take over? I don’t know.

I… don’t really have a solid grasp on what’s happening in tech socially at the moment. I’ve drifted a bit away from the industry part, which is where that all tends to come up. I have the vague sense that things are improving, but that might just be because the Rust community is the one I hear the most about, and it puts a lot of effort into being inclusive and welcoming.

So… more projects should be like Rust? Do whatever Rust is doing? And not so much what Linus is doing.

I haven’t heard this brought up much lately, but it would still be nice to see. The Bay Area runs on open source and is raking in zillions of dollars on its back; pump some of that cash back into the ecosystem, somehow.

I’ve seen a couple open source projects on Patreon, which is fantastic, but feels like a very small solution given how much money is flowing through the commercial tech industry.

One might wonder where the money to host a website comes from, then? I don’t know. Maybe we should loop this in with the above thing and find a more informal way to pay people for the stuff they make when we find it useful, without the financial and cognitive overhead of A Transaction or Giving Someone My Damn Credit Card Number. You know, something like Bitco— ah, fuck.

Last fall, Epic Games released Fortnite’s free-to-play “Battle Royale” game mode for the PC and other platforms, generating massive interest among gamers.

This also included thousands of cheaters, many of whom were subsequently banned. Epic Games then went a step further by taking several cheaters to court for copyright infringement.

In the months that have passed, several cases have been settled with undisclosed terms, but it appears that not all defendants are easy to track down. In at least two cases, Epic had to retain the services of private investigators to locate their targets.

In a case filed in North Carolina, the games company was unable to serve the defendant (now identified as B.B.), so it called in the help of Klatt Investigations, with success.

“[A]fter having previously engaged two other process servers that were unable to locate and successfully serve B.B., Epic engaged Klatt Investigations, a Canadian firm that provides various services related to the private service of process in civil matters.

“In this case, we engaged Klatt Investigations to locate and effect service of process by personal service on Defendant,” Epic informs the court.

As Epic Games didn’t know the age of the defendant beforehand, they chose to approach the person as a minor, which turned out to be a wise choice. The alleged cheater indeed appears to be a minor, so both the Defendant and the Defendant’s mother were served.

Based on this new information, Epic Games asked the court to redact any court documents that reveal personal information of the defendant, which includes his or her full name.

Epic’s request to seal

This is not the first time Epic Games has used a private investigator to locate a defendant. It hired S&H Investigative Services in another widely reported case, where the defendant also turned out to be a minor.
