Wednesday, December 30, 2015

Many video games are power fantasies, and most that involve warfare depict the glory of combat and put the player in the lead role. Not so in The Witcher 3. Geralt has his own motivations, and he does his best to avoid politics and the larger conflict between Nilfgaard and the Northern Kingdoms.

And CD Projekt never depicts war as glorious or fun. Soldiers describe combat as a lot of boredom and waiting punctuated by moments of frenzied madness. The Northern War of The Witcher 3 is all about waiting, survival, and boredom.

“As an open-source platform, Android is built upon the collaboration of the open-source community,” a Google spokesperson told VentureBeat. “In our upcoming release of Android, we plan to move Android’s Java language libraries to an OpenJDK-based approach, creating a common code base for developers to build apps and services. Google has long worked with and contributed to the OpenJDK community, and we look forward to making even more contributions to the OpenJDK project in the future.”

I haven't been paying a lot of attention to the case this fall, and I haven't seen a lot of coverage, either, so these random speculations intrigued me, though I have no idea what they mean.

Tuesday, December 29, 2015

Guinea was declared free of Ebola transmission on Tuesday after more than 2,500 people died from the virus in the West African nation, leaving Liberia as the only country still counting down the days until the end of the epidemic.

The announcement comes 42 days after the last person confirmed with Ebola tested negative for a second time. The country now enters a 90-day period of heightened surveillance, the U.N. World Health Organization said.

Saturday, December 26, 2015

I've been keeping my eye on Pillars of Eternity for several months now, but hadn't yet taken the plunge.

Then, over the holiday break, it went on deeply-discounted sale on Steam.

So I made the decision.

And, wow, is this a great game!

It's everything the reviews said it was.

And the real-time nature, so far, hasn't been much of a problem. The first thing I did was open the Options, find the Auto-Pause settings, and there, conveniently labelled, was a checkbox: Set All.

So I checked it, and it set all, and so far I've spent 10 hours exploring this new world.

Wednesday, December 23, 2015

As it turns out calves have a simple algorithm for navigation – If my head fits through it, I can go. The problem is that their shoulders are wider than their head and their hips are even wider yet. In fact, this behavior is one we take advantage of. We catch cows in a “head gate” when we need to work with them. The cow walks down a chute, sees a gap in the head gate and tries to go through it. The edges of the gate catch their shoulders and close, locking the cow in.

Health care researchers who have seen the new findings say they are likely to force a rethinking of some conventional wisdom about health care. In particular, they cast doubt on the wisdom of encouraging mergers among hospitals, as parts of the 2010 health care law did.

Larger, integrated hospital systems – like those in Grand Junction – can often spend less money in Medicare, by avoiding duplicative treatments. But those systems also tend to set higher prices in private markets, because they face relatively little local competition.

The article goes on to note that:

Below, a scatterplot showing medical spending per person for Medicare and private insurance for all 306 hospital referral regions in the United States.

The chart looks random, and that’s the point: There is no real relationship between spending in one system and the other.

The answers aren't easy, but the article gives lots of suggestions for further investigation, and for further thought.

And big thanks to The New York Times for continuing to chip away at this complicated yet crucial puzzle.

Tuesday, December 22, 2015

Alas, while Juniper used Dual_EC_DRBG with the P-256 NIST curve and the point P specified in SP 800-90A in ScreenOS — the operating system running on NetScreen VPN gateways — they chose to use a different point Q and not the one supplied in the standard for P-256.

...

However, apparently starting in August 2012 (release date according to release notes for 6.3.0r12), Juniper started shipping ScreenOS firmware images with a different point Q. Adam Caudill first noted this difference after HD Moore posted a diff of strings found in the SSG 500 6.2.0r14 and the 6.2.0r15 firmware. As we can deduce from their recent security advisory and the fact that they reverted to the old value of Q in the patched images, this was a change not authored by them.

The creepiest thing about CVE-2015-7756 is that there doesn't seem to be any unauthorized code. Indeed, what's changed in the modified versions is simply the value of the Q point. According to Ralf, this point changed in 2012, presumably to a value that the hacker(s) generated themselves. This would likely have allowed them to passively decrypt any ScreenOS VPN sessions they were able to eavesdrop on.

People assumed that the NSA wanted a backdoored random number generator so they could look at other people's traffic, but of course a plausible answer is that a backdoored random number generator is even more useful for looking at your own traffic in an economical way.
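For context, here is the textbook shape of the Dual_EC_DRBG construction and why controlling Q matters so much. This is the standard description (assuming the usual P-256 parameters, where each output is the x-coordinate with 16 bits discarded), not anything Juniper-specific:

```latex
% State update and output, for curve points P and Q and secret state s_i:
s_{i+1} = x(s_i \cdot P) \qquad r_i = \mathrm{truncate}\big(x(s_i \cdot Q)\big)
% Whoever generated Q can know a scalar d with P = d \cdot Q. From a single
% output r_i they recover R = s_i \cdot Q (brute-forcing the few truncated
% bits), and then compute the next internal state directly:
x(d \cdot R) = x\big(s_i \cdot d \cdot Q\big) = x(s_i \cdot P) = s_{i+1}
```

From that state, every future output is predictable. Which is why quietly swapping in a new Q amounts to re-keying the backdoor: the old scalar d becomes useless, and only whoever generated the replacement point can exploit it.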

The argument to the strcmp call is (...), which is the backdoor password, and was presumably chosen so that it would be mistaken for one of the many other debug format strings in the code. This password allows an attacker to bypass authentication through SSH and Telnet. If you want to test this issue by hand, telnet or ssh to a Netscreen device, specify any username, and the backdoor password. If the device is vulnerable, you should receive an interactive shell with the highest privileges.
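As a sketch of the pattern being described (a hardcoded credential check disguised among debug strings), consider this hypothetical fragment. The real password is elided above, so the constant here is a deliberately fake placeholder, and none of this is actual ScreenOS code:

```python
# Hypothetical illustration of a hardcoded-password backdoor. The constant
# is shaped like a printf-style debug format string so that it hides among
# the many legitimate format strings in the binary.
BACKDOOR = "<<< %s(debug) = %d"   # placeholder, NOT the real ScreenOS value

def check_real_credentials(username: str, password: str) -> bool:
    # Normal credential verification would happen here.
    return False

def authenticate(username: str, password: str) -> bool:
    if password == BACKDOOR:   # the strcmp() described in the quote above
        return True            # any username succeeds, highest privileges
    return check_real_credentials(username, password)
```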

We are detecting numerous login attempts against our ssh honeypots using the ScreenOS backdoor password. Our honeypot doesn't emulate ScreenOS beyond the login banner, so we do not know what the attackers are up to, but some of the attacks appear to be "manual" in that we do see the attacker trying different commands.

Monday, December 21, 2015

One of my favorite parts is the section where the author discusses Babai's patience and perseverance:

Babai’s proposed algorithm doesn’t bring graph isomorphism all the way into P, but it comes close. It is quasi-polynomial, he asserts, which means that for a graph with n nodes, the algorithm’s running time is comparable to n raised not to a constant power (as in a polynomial) but to a power that grows very slowly.

The previous best algorithm — which Babai was also involved in creating in 1983 with Eugene Luks, now a professor emeritus at the University of Oregon — ran in “subexponential” time, a running time whose distance from quasi-polynomial time is nearly as big as the gulf between exponential time and polynomial time. Babai, who started working on graph isomorphism in 1977, “has been chipping away at this problem for about 40 years,” Aaronson said.
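For reference, here is what those growth rates mean concretely (standard complexity-theory definitions, not something specific to the article):

```latex
\text{polynomial: } n^{c} \qquad
\text{quasi-polynomial: } n^{(\log n)^{c}} = \exp\!\big((\log n)^{c+1}\big) \qquad
\text{exponential: } \exp\!\big(n^{c}\big)
% The 1983 Babai--Luks algorithm was subexponential, with the well-known
% bound \exp\big(O(\sqrt{n \log n})\big): far below exponential, but still
% far above quasi-polynomial.
```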

Babai's paper is up on arXiv: Graph Isomorphism in Quasipolynomial Time. Although this is definitely not for the casual student, it's a remarkably clear paper, systematically working its way through the problem in detail.

As Babai notes near the end of the paper, in this area of Computer Science, theory and practice have taken different paths:

The purpose of the present paper is to give a guaranteed upper bound (worst-case analysis); it does not contribute to practical solutions. It seems, for all practical purposes, the Graph Isomorphism problem is solved; a suite of remarkably efficient programs is available (nauty, saucy, Bliss, conauto, Traces). The article by McKay and Piperno [McP] gives a detailed comparison of methods and performance. Piperno’s article [Pi] gives a detailed description of Traces, possibly the most successful program for large, difficult graphs.

A much more interesting case happened when a merge that was clearly a conflict in libgit2 was being merged successfully by Git. After some debugging, we found that the merge that Git was generating was broken — the single file that was being merged was definitely not a valid merge, and it even included conflict markers in the output!

It took a bit more digging to find out the reason why Git was "successfully" merging this file. We noticed that the file in question happened to have exactly 768 conflicts between the old and the new version. This is a very peculiar number. The man page for git-merge-one-file confirmed our suspicions:

The exit value of this program is negative on error, and the number of conflicts otherwise. If the merge was clean, the exit value is 0.

Given that shells only use the lowest 8 bits of a program's exit code, it's obvious why Git could merge this file: the 768 conflicts were being reported as 0 by the shell, because 768 is a multiple of 256!
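This is easy to demonstrate for yourself; here's a quick sketch (any POSIX system):

```python
import subprocess

# A child's exit status is reported to its parent modulo 256, so a process
# exiting with status 768 is indistinguishable from one exiting cleanly.
result = subprocess.run(["sh", "-c", "exit 768"])
print(result.returncode)   # prints 0, because 768 % 256 == 0
```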

One of the things about rewriting an algorithm is that you have to understand whether a difference in behavior represents a bug in your new code (which is the vastly more likely case), or is actually a discovery of a bug that existed in the prior implementation, but was hitherto unknown.

Big kudos to the libgit2 team for working their way through this one!

This time we will pick only one winner who will receive a one-of-a-kind custom-made Witcher sword! Check out the photo of the blade below. This thing was forged by Hattori himself (seriously)! Please check if we can send the prize to your country -- there might be some legal restrictions regarding sending sharp objects!

Saturday, December 12, 2015

What all of this posturing about bad hosts is meant to obscure is that the exact kind of hosting that Airbnb does want, what it envisions as the core of the service—thousands and thousands and thousands of regular people sharing their homes whenever they’re not home—is largely illegal in New York. Like actually against the law, up-to-$5000-fine kind of illegal! While it’s not uncommon for startups looking to “disrupt” something to begin operating in liminal legal spaces, the laws in New York surrounding short-term rentals are rather unambiguous, despite Airbnb’s protests that there is a “lack of clarity” around the rules. It’s illegal for a person living in an apartment in a “multiple-dwelling”—essentially, a building with more than two apartments—to rent out his or her entire home for a period of less than thirty days if he or she is not present. (It’s perfectly legal in single- and two-family homes, which aren’t “multiple dwellings.”) So, a whole-home listing in a building with more than two apartments that doesn’t have a minimum stay of thirty days—and isn’t a hotel or boarding house or some such—is probably illegal.

Even though Uber is almost the same size and scale as Facebook was back then, it's in a substantially better situation. The company has been careful to learn from Facebook's missteps and control who owns its stock so it isn't forced into an IPO.

None of the 400,000-plus drivers can receive ride requests until they accept the agreement, which lays out a lengthy provision requiring mandatory arbitration starting on page 15 and flagged on the first page. While it includes a way to opt out, many drivers may not understand that or may fear retaliation for doing so.

A recent investigation into the general arbitration system by The New York Times found that private arbitration is subject to little oversight, rarely can be appealed, does not have clear evidence rules, and is subject to rampant conflict of interest that tilts towards corporations.

Uber says the new driver agreement was necessary because on Wednesday U.S. District Judge Edward Chen ruled that part of the agreement Uber drivers had been signing was not enforceable, rendering the entire agreement unenforceable.

In order to correct that, Uber rewrote the agreement and removed a requirement that arbitration be confidential. The company informed Chen of the new agreement on Thursday and pushed it out to drivers Friday, the San Francisco-based company said.

Uber brought in Rachel Whetstone, a top Google policy and communications executive, to lead Uber’s overall policy and communications. Ms. Whetstone hired Jill Hazelbaker, an executive at Snapchat and a former colleague of Ms. Whetstone at Google, where Ms. Hazelbaker also ran policy and communications teams.

It appears, insiders say, that the company is consolidating its communications and policy operation under its new leadership from Google.

Due to an (unexpected, involuntary) change of employment about 6 years ago, I found myself with a 401K account that I was required to roll over into an IRA, and so I became a more active participant in my retirement planning.

Anyone who, like me, got started in trying to manage his own retirement account at the start of 2010 surely thinks of himself as the greatest investment analyst in the history of the world, when the actual fact is that I simply got in on the best 5 years (2010-2014) that the market ever had, or ever is likely to have.

During that time, I bought and held a small number of Amazon shares. I also got lucky enough to buy Netflix stock when it took a BIG dip, and then held some of those shares through its 7-for-1 split. And I lucked into buying Hawaiian Airlines at a time when all the airline stocks were in the tank due to the Great Recession.

So pretty much all my gains during those 5 years were in those three stocks, and they were all lucky choices to which I committed relatively small investments.

In each case, when they started to race up I sold some of my shares, enough to cover my initial investment, and let the rest ride.

I'm a HUGE fan of what they call "dollar cost averaging", which is basically: don't invest all at once; don't sell all at once. Instead, invest gradually over a period of time, splitting your purchase into separate smaller purchases, or splitting your sale into separate smaller sales. That way, if you have the bad luck to pick a bad day for one of your orders, the probabilities are that the luck will even out on the other ones.
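Here's a toy illustration of why the splitting helps, with made-up prices; a fixed-dollar purchase automatically buys more shares when the price dips:

```python
# Hypothetical monthly prices, investing a fixed $1000 each month.
prices = [50.0, 45.0, 55.0, 40.0, 60.0, 50.0]
monthly = 1000.0

shares = sum(monthly / p for p in prices)
avg_cost = (monthly * len(prices)) / shares
print(f"shares bought: {shares:.2f}")                                # ~122.07
print(f"average cost per share: ${avg_cost:.2f}")                    # ~$49.15
print(f"simple average of prices: ${sum(prices) / len(prices):.2f}") # $50.00
# The average cost lands below the simple average price, because the fixed
# dollar amount buys more shares on the dips than on the spikes.
```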

And, I ALWAYS ALWAYS ALWAYS use limit orders, for both buying and selling, never market orders. For buying, I pick a target price which is slightly (1-3%) lower than the current price, and let the computers monitor it and execute on a price dip. And for selling I do the same thing.
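In code form, the rule is roughly this (a sketch with made-up numbers):

```python
# Pick limit prices a small percentage away from the current quote, and
# let the broker's computers wait for the dip (or the pop).
current_price = 100.00    # hypothetical last trade
discount = 0.02           # somewhere in the 1-3% band
buy_limit = round(current_price * (1 - discount), 2)    # 98.00
sell_limit = round(current_price * (1 + discount), 2)   # 102.00
print(buy_limit, sell_limit)
```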

So the computers do the hard work.

Since my IRA company has a per-trade commission, I need to be careful not to execute too many orders. But in practice I only issue about 1-2 orders a month, so I'm not spending much on commissions.

Even though Amazon and Netflix are incredibly volatile and incredibly pricey, they are also extremely-well-run companies with huge potential ahead of them. I think this is also true of Microsoft, Intel, and Google.

Tesla is also a fascinating company, but I don't own it.

The majority of my portfolio is, and has been, in "consumer staples": I own Clorox, Johnson & Johnson, Procter & Gamble, Church & Dwight, Campbell's, Kimberly Clark, VF Corp, Wal-Mart, etc. Over almost any period of time (1 year, 5 years, 25 years, 70 years), these companies have been steadily growing at a slow rate.

AND, they pay dividends!

Microsoft and Intel also pay dividends. I like companies that pay dividends. Amazon and Netflix stand out as exceptions; pretty much all my other investments are in dividend-paying companies.

I have some other investments in other companies, but much of that is a motley mess.

I don't spend a lot of time on my portfolio. I tend to check it once a week or so. It's really important not to watch it day-to-day, because many of my holdings are quite volatile and my account routinely swings down and up by significant amounts in a single day; if I watched it every day, I'd die from the emotional roller-coaster.

So I just let the various dividends accumulate in my "cash bucket" in my account, and, every so often, enough has accumulated that I re-invest those dividends by buying some more of one of the stocks I already own (or, VERY rarely, initiating a position in a new company that I like). The re-investment decision is probably really important, but I don't spend much time on it; I just pick a company that I "want" to own some more of, and enter a buy order for the cash that I've got available to invest.

So I stay mostly fully invested.

I have about 12% of my portfolio in bonds, which has been a complete disaster, since the last 7 years have been terrible for bonds. Now, since I'm just talking about my IRA here, and I also have a 401K at my company which is 100% invested in stock funds, my overall retirement savings are tilted even more heavily toward stocks, so I'm really only holding something like 8% in bonds overall.

But then, I'm still (relatively) young, and am hoping to go at least 10 more years before I start to draw on this money, so staying fully in stocks for now is a good strategy. When I get to my mid-60's I'll probably start moving some of that money over to bonds, though I still love those consumer staples and their dividend rate remains better than any bond fund I've seen out there.

I really don't do much with stock screeners, research, etc., other than reading the business pages once in a while. Every so often I hear about a company that I'd like to own, and then I expand my portfolio, but I have enough separate investments right now that I don't really want any more things to look at.

In fact, basically every company or mutual fund that I bought after spending lots of time doing screens and research was a failure for me. I did much better just picking companies I like and investing in them gradually over time.

Pretty much everything Barry Ritholtz has on his site is superb and I read that site (and the things he links to) regularly.

I am a long-term investor: the bulk of my new investment is in my 401K at my company, which unfortunately is a very poorly run investment operation (small fund selection, high fund fees, high administration fees), but it is highly tax-advantaged so that's what I'm doing. And my company does a small regular contribution to my plan. So, overall, the bulk of my new investments are going there, and my IRA is just sitting there gathering dividends and slowly increasing its holdings of my existing companies.

I'm hoping that, in 10-12 years when I retire, I'll have enough. But I'm not really sure that I will.

But I don't think there's much I can really do about that worry, other than to keep on with the current plan.

My back-of-the-envelope math says that my continued 401K contributions, combined with what I've already saved, and the paltry amount of Social Security that I'll qualify for when I turn 70, will be adequate.

So I try to mostly not think about this stuff, because it's depressing and I have many other things to do.

But somebody asked, so I replied.

And I figured I'd put it on my blog, because maybe somebody will tell me that I've totally overlooked something.

Wednesday, December 9, 2015

A classic November-December setup featuring a powerful jet stream stretching from eastern Asia across the Pacific for 5,000 miles to the Pacific Northwest is acting as the conductor for this storm parade. The persistent pipeline of moisture is being supplied by what meteorologists sometimes refer to as an atmospheric river. In this case, the plume of moisture impacting the Northwest extends all the way from the western Pacific Ocean near the Philippines.

Readings in Database Systems (commonly known as the "Red Book") has offered readers an opinionated take on both classic and cutting-edge research in the field of data management since 1988. Here, we present the Fifth Edition of the Red Book — the first in over ten years.

I had the original edition, got it in 1989 if memory serves. Chewed it to death.

fsync() transfers ("flushes") all modified in-core data of (i.e., modified buffer cache pages for) the file referred to by the file descriptor fd to the disk device (or other permanent storage device) so that all changed information can be retrieved even after the system crashed or was rebooted.
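In other words, the durable-write pattern looks roughly like this minimal sketch of the man-page semantics (the file name is arbitrary):

```python
import os

# Write a record durably: the bytes aren't guaranteed to survive a crash
# or reboot until fsync() has returned successfully.
fd = os.open("journal.dat", os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
try:
    os.write(fd, b"committed record\n")
    os.fsync(fd)   # flush this file's modified pages to stable storage
finally:
    os.close(fd)
# For newly created files, a careful implementation also fsyncs the
# containing directory, so that the file's name itself is durable.
```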

Hypervisors outperforming the host machine is the most interesting to me. The results of this test clearly show that the hypervisors must be lying about synced writes for performance. This corroborates what I’ve seen with Packer as well, where if the virtual machine is not cleanly shut down, committed writes are lost. fsync() in a virtual machine does not mean that the data was written on the host, only that it is committed within the hypervisor.

ESX(i) does not cache guest OS writes. This gives a VM the same crash consistency as a physical machine: i.e. a write that was issued by the guest OS and acknowledged as successful by the hypervisor is guaranteed to be on disk at the time of acknowledgement. In other words, there is no write cache on ESX to talk about, and so disabling it is moot. So that’s one thing out of our way.

Is this more than a "he says, she says" thing? That is, is there a more definitive resolution of this question somewhere?

By durable, I mean that fsync() should actually commit writes to physical stable storage, not just the disk write cache when that is enabled. Databases and guest VMs need this, or an equivalent feature, if they aren't to face occasional corruption after power failure and perhaps some crashes.

I'm sure there are places where this information is definitively and clearly documented.

Sunday, December 6, 2015

Around the end of July, the Inter-webs were full of people discussing their experiences.

Windows 10 wasn't ready for my computer.

In late September, I fussed with it some.

I dug into the Windows Update history screens, and there were strange event codes and unfamiliar messages.

Researching them with various search engines led to lots of people with similar strange messages, and odd suggestions to do things like "clear your download directory, maybe you had a corrupted download."

I tried a few of those strange suggestions, but basically forgot about things, and determined that the most probable outcome was that this computer was only going to run Windows 8.1 (which is fine), and maybe in the future I might try running Windows 10.

Then, Friday, the computer seemed to wake up, and announced that it thought that Windows 10 was "coming".

And, today, it asked me if I wanted to start the download.

So, we'll see.

Maybe Windows 10 is in my future.

Maybe not.

And (is it an omen?), just as I go to post this, Blogger is down.

Friday, December 4, 2015

There is something about The Witcher that is so eerie, spooky and dramatic. I can’t quite put my finger on it. Whether it’s playing as a Van Helsing-esque mutant, or the monsters themselves that you are hired to hunt. Whatever the driving force might be, there is something truly special about CD Projekt Red’s Witcher 3. Again, as far as games go, I had a fair many gripes with the title, but racking up a total play time of over 400 hours, it’s safe to say I got my money’s worth. The other thing is that The Witcher wasn’t, like Fallout, merely substance. The Witcher was art. The beauty, the immersion, it was all so well done that it felt less like a game than a film. It was truly an amazing experience. Fallout 4 is a fantastic game, and an unusually charming one, for the amount of bugs that players face each time they boot it up.

The key ideas of the algorithm for GI are really classic ones from design of algorithms. The genius is getting them all to work together. The ideas break into two types: those that are general methods from computer science and those that are special to the GI problem.

Real-Time Strategy (RTS) games are a sub-genre of strategy games where players need to build an economy (gathering resources and building a base) and military power (training units and researching technologies) in order to defeat their opponents (destroying their army and base). Artificial Intelligence problems related to RTS games deal with the behavior of an artificial player. Among other things, this involves learning how to play, understanding the game and its environment, and predicting and inferring game situations from context and sparse information.

In computing systems built on such huge scales, even low-probability failures take place relatively frequently. If an individual computer can be expected to crash, say, three times a year, in a data center with 10,000 computers, there will be nearly 100 crashes a day.

Our group at the University of Toronto has been investigating ways to prevent that. We started with the simple premise that before we could hope to make these computers work more reliably, we needed to fully understand how real systems fail. While it didn’t surprise us that DRAM errors are a big part of the problem, exactly how those memory chips were malfunctioning proved a great surprise.
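The arithmetic in that first excerpt is easy to check:

```python
# Back-of-the-envelope check of the claim above:
crashes_per_day = 3 * 10_000 / 365
print(f"{crashes_per_day:.0f} crashes/day")   # ~82, i.e. "nearly 100"
```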

Modern NUMA systems are quite different from the old ones, so we must revisit our assumptions about them and rethink how to build NUMA-aware operating systems. This article evaluates performance characteristics of a representative modern NUMA system, describes NUMA-specific features in Linux, and presents a memory-management algorithm that delivers substantially reduced memory-access times and better performance.

Of course, it’s possible that the software on those cars could have been updated back at dealerships in the intervening years – but that wouldn’t address all of the issues in the paper, and evidence suggests plenty of vulnerabilities still exist.

Over a range of experiments, both in the lab and in road tests, we demonstrate the ability to adversarially control a wide range of automotive functions and completely ignore driver input — including disabling the brakes, selectively braking individual wheels on demand, stopping the engine, and so on.

In this paper we examine a popular aftermarket telematics control unit (TCU) which connects to a vehicle via the standard OBD-II port. We show that these devices can be discovered, targeted, and compromised by a remote attacker and we demonstrate that such a compromise allows arbitrary remote control of the vehicle.

How Change Happens draws on many first-hand examples from the global experience of Oxfam, one of the world’s largest social justice NGOs, as well as Duncan Green’s 35 years of studying and working on international development issues. It tests ideas and sets out the latest thinking on what works to achieve progressive change.

The implementation weaknesses described in this white paper are common to most organizations, and point to limitations in traditional modeling of and response to threats to computer security. Most of the problems occur due to ranking risk inappropriately, poor communications, and uncoordinated, slow, ineffectual responses.

Our results show that more than a decade and a half after Why Johnny Can’t Encrypt, modern PGP tools are still unusable for the masses. We finish with a discussion of pain points encountered using Mailvelope, and discuss what might be done to address them in future PGP systems.

Successful deployment of a messaging system requires background information that is not easily available; most of what we know, we had to learn in the school of hard knocks. To save others a knock or two, we have collected here the essential background information and commentary on some of the issues involved in successful deployments.

Currently, these key-value stores use either LRU or an LRU approximation as the replacement policy for choosing a key-value pair to be evicted from the store. However, if the cost of recomputing cached values varies a lot, as in the RUBiS and TPC-W benchmarks, then none of these replacement policies is the best choice. Instead, it can be advantageous to take the cost of recomputation into consideration.
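One classic way to fold recomputation cost into eviction is the GreedyDual family of policies. Here is a minimal sketch of the idea (my illustration, with invented names, not the paper's implementation):

```python
import heapq

# Cost-aware eviction in the GreedyDual style: entries that are expensive
# to recompute get higher priority and survive longer, instead of being
# evicted purely by recency.
class CostAwareCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.inflation = 0.0   # rises on each eviction, aging older entries
        self.entries = {}      # key -> (priority, value, cost)
        self.heap = []         # (priority, key); stale entries pruned lazily

    def get(self, key):
        if key not in self.entries:
            return None        # caller recomputes, paying the cost
        _, value, cost = self.entries[key]
        self._set_priority(key, value, cost)   # refresh priority on a hit
        return value

    def put(self, key, value, cost):
        if key not in self.entries and len(self.entries) >= self.capacity:
            self._evict()
        self._set_priority(key, value, cost)

    def _set_priority(self, key, value, cost):
        priority = self.inflation + cost   # costlier entries outrank cheap ones
        self.entries[key] = (priority, value, cost)
        heapq.heappush(self.heap, (priority, key))

    def _evict(self):
        while self.heap:
            priority, key = heapq.heappop(self.heap)
            if key in self.entries and self.entries[key][0] == priority:
                self.inflation = priority   # newcomers must beat the evictee
                del self.entries[key]
                return
```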

Apache Kafka has a data structure called the "request purgatory". The purgatory holds any request that hasn't yet met its criteria to succeed but also hasn't yet resulted in an error. The problem is “How can we efficiently keep track of tens of thousands of requests that are being asynchronously satisfied by other activity in the cluster?”
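Here is a toy sketch of that idea (illustrative names and structure, not Kafka's actual code): park each unsatisfied request under the keys that might complete it, so a completion event only touches the watchers for that key instead of scanning every outstanding request:

```python
from collections import defaultdict

class DelayedOperation:
    """A request that hasn't yet met its criteria to succeed."""
    def __init__(self, required_acks):
        self.required_acks = required_acks   # e.g. replicas that must ack
        self.acks = 0
        self.completed = False

    def try_complete(self):
        if not self.completed and self.acks >= self.required_acks:
            self.completed = True
        return self.completed

class Purgatory:
    def __init__(self):
        self.watchers = defaultdict(list)    # watch key -> parked operations

    def try_complete_else_watch(self, op, keys):
        if op.try_complete():
            return True                      # satisfied immediately
        for key in keys:
            self.watchers[key].append(op)    # park until activity on key
        return False

    def check_and_complete(self, key):
        # Called when cluster activity touches `key` (e.g. an ack arrives).
        still_waiting = [op for op in self.watchers.pop(key, [])
                         if not op.try_complete()]
        if still_waiting:
            self.watchers[key] = still_waiting

# A real implementation also needs expiration for requests that never
# complete; Kafka's redesigned purgatory uses a hierarchical timing wheel
# for exactly that.
```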

We systematize the current knowledge about various protection techniques by setting up a general model for memory corruption attacks. Using this model we show what policies can stop which attacks. The model identifies weaknesses of currently deployed techniques, as well as other proposed protections enforcing stricter policies.

In this blog post, we will briefly show the similarities and differences between Paxos and Raft. Firstly, we will describe what a consensus algorithm is. Secondly, we will describe how to build a replication solution using instances of a consensus algorithm. Then we will describe how leaders are elected in both algorithms and some safety and liveness properties.

A customer reported an unusual problem with our CloudFlare CDN: our servers were responding to some HTTP requests slowly. Extremely slowly. 30 seconds slowly. This happened very rarely and wasn't easily reproducible. To make things worse, none of our usual monitoring had caught the problem. At the application layer everything was fine: our NGINX servers were not reporting any long running requests.

If you’re coming into this relatively new, or even if you need a little brush-up, let me state: Steve Meretzky has earned the title of “Game God” several times over, having been at the center of the early zenith of computer games in the 1980s and persisting, even thriving, in the years since. He continues to work in the industry, still doing game design, 35 years since he started out as a tester at what would become Infocom.

But more than that – besides writing a large number of game classics in the Interactive Fiction realm, he was also an incredibly good historian and archivist, saving everything.

Ubisoft’s latest tactical shooter, Tom Clancy’s Rainbow Six Siege, adopts a striking bent towards a unique brand of pseudo-realism. Siege evokes a perverse version of the uncanny valley. It mixes the over-the-top, arcade-style renditions of violence games often lean towards with the gut-wrenching reality that we are, in fact, remarkably fragile.

Ball tracking systems generally rely on one of two approaches. The first looks to follow the movement of the ball in three dimensions and then predicts various likely trajectories into the future. This “tree” of possible trajectories can then be pruned as more ball-tracking data becomes available.

The advantage of this approach is that the laws of physics are built in to the trajectory predictions so unphysical solutions can be avoided. However, it is hugely sensitive to the quality of the ball tracking data and so tends to fail when the ball is occluded or when players interact with the ball in unpredictable ways.

Another method is to track the players and note when they are in possession of the ball. The movement of the ball is then assumed to follow the player in possession, passing from one player to another as possession transfers. The advantage here is that the system does not get so confused by rapid or unpredictable passes—indeed, this approach works well in basketball, where dribbling and occlusion can make life difficult for ball trackers. However, without physics-based constraints on the motion of the ball, these systems can produce inaccurate tracks.

Shenzhen is also, and only very recently, the hoverboard manufacturing capital of the world. In the smoke and asphalt of Bao An, a sprawling industrial flatland roughly the size of Philadelphia that serves as one of the city’s main manufacturing districts, hundreds of factories churn out much of the world’s supply of the boards, which are then shipped, rebranded, and sold around the globe.

Thursday, December 3, 2015

I feel like this might be the way well crafted open worlds are supposed to be experienced—not as gluttonous binges or narrowly focused rampages, but as long-term occupancies. I’ve found that these games exist more vividly in my mind as I embrace this style of gameplay. They grow in my imagination as they occupy more and more space in my memory. Instead of rushing through them or viewing them as content generators, I abide in them.