'cause it's hard to see from where I'm standin'

Setting aside for a moment the obvious charges of whitewashing – a controversy I can only hope hurts Paramount Pictures at the box office as much as that recent execrable Matt Damon flop of a movie – there are essentially two criteria by which to assess Scarlett Johansson’s lead in Rupert Sanders’ Ghost in the Shell: its quality in comparison to Masamune Shirow’s seminal manga series and Mamoru Oshii’s classic anime feature film, and its worth as a stand-alone science fiction film. It fails miserably on both counts. There will be spoilers in this review, but nothing can spoil it more than what they did to themselves.

The cyberpunk subgenre of science fiction contains artistic works that share common themes – subversion of representational government power by corporate power, technological and cybernetic advancements that outpace and exceed regulatory authority or societal understanding, dense and sprawling urbanization – and that are generally musings on the nature of individualism and humanity in a world with fewer political and economic rights, a restructuring or revisionism of cultural touchstones, and constant contact with otherness. The stories cyberpunk wants to tell go back to the heart of speculative fiction as a reaction to a changing world, and can reach topics such as corporate ethics and asymmetric warfare in Alien and Aliens, labor and civil rights in Blade Runner, and identity and transhumanism in 1995’s Ghost in the Shell.

“Theme,” however, may be the wrong term, as a “theme” can be a mere backdrop without reasoning or justification behind it. Star Wars, at heart, doesn’t require its setting to tell its story and doesn’t particularly gain from it, except insofar as it presents lovely visual backdrops for what is essentially a very common story. That is why it is dubbed a “space opera,” and anime such as Cowboy Bebop are called “space westerns.” Noir once had common themes of neorealism, expressionism and morality in an uncaring and oftentimes hostile world, making it prime for police procedurals, detective thrillers and gangster films, but it’s used more nowadays as an allusion to certain stylistic options – dark and smoky interiors, stark backlighting, et cetera – than to the ideas underneath.

The 1989 manga and the 1995 movie contained thoughts on consciousness and individual identity when all of one’s body is mass-manufactured – protagonist Major Motoko Kusanagi at one point spies a salarywoman with the exact same face, body and hair as her, and muses on reinventing herself – in a Japanese society for which dolls retain strong cultural importance, and which also harbors concerns about cultural identity under internationalist corporate hegemons supported by technology. That concern persists even today, when “helper robots” are being developed partly because they are more amenable to a very insular people than immigrant labor. In the manga and anime, the Major works in a counter-cyberterrorism paramilitary police force where almost everybody has significant mechanical prosthetics, either in part or in full, and whose aim is largely that of social stability, often in the face of foreign incursion or influence – many of the antagonists are Russian agents, American moguls, armed refugees of foreign wars, terrorists and the like. But the overarching ‘villain,’ so to speak, in the original manga series and the 1995 movie is a virus devoted to industrial and political espionage and manipulation, crafted by an internal (and rival) military division for the purposes of influencing international relations, that has gone rogue, achieved self-identity, and requested asylum, raising questions as to the nature of life and sentience.

The 1995 movie, its sequel – 2004’s Ghost in the Shell 2: Innocence, also directed by Mamoru Oshii – and the television series Ghost in the Shell: Stand Alone Complex were incredibly dense with the characters’ philosophizing about such issues while also dealing with problems unique to the setting: mental disorders in an age of always-online consciousnesses, the ability to copy and implant memories and entire identities into surrogates both human and entirely synthetic, and the scenarios therein.

2017’s Ghost in the Shell eschews all that, and thereby solves the problem of perhaps being overly dense by being unconscionably dull. It reduces ideas to mere style, confuses reference with inference, and in doing so it says to me that the art it portrays is intellectually, culturally and morally bankrupt. To add insult to injury – and there is great injury here – the movie is rife with one-dimensional protagonists and supporting characters, run through with massive plot holes, advances its plot by forcing its leads to make wanton and brazenly stupid decisions, and rejects every hallmark of the genre for a paint-by-numbers revenge plot that is as hypocritical as it is predictable. It, like Spike Lee’s 2013 remake of the Korean thriller Oldboy, is the aping of a superior film by a so-called fan of the work that somehow misses everything that made the original worthwhile.

Mamoru Oshii’s Major Motoko Kusanagi was a contemplative, intelligent, supremely competent and capable female protagonist who had proper working relationships with her team, who was raised almost all her life in a full-body prosthesis thanks to Japan’s legendary healthcare and thus inducted into what is essentially a Special Forces unit due to her familiarity and skill with the body she possesses, and who also possessed understandable concerns, interests and goals. Rupert Sanders’ Major, who for two thirds of the film went by some white name in deference to Scarlett Johansson’s white face, was a headstrong, reckless ball of anger who gave cursory lip-service to her status but otherwise treated herself as a Marvel Comics superheroine – perhaps because Johansson is already experienced at playing Black Widow for the current crop of schlock blockbuster action flicks – forgoing her team and getting herself captured at every turn, and murdering security guards, random thugs and basically anybody who looks askance at her with abandon. Upon finding out that she is actually the brainwashed result of Evilcorp’s kidnapping and experimentation with a waifish Japanese street urchin named Motoko Kusanagi, Scarlett Johansson’s character can’t actually come around to call herself by that name, because perhaps even the actress is somewhat embarrassed by the stark contrast. The remake all but destroys a strong female role, and with the most narratively simplistic of copouts.

The scenery is a pastiche of cyberpunkish stylistic touchstones, an urbanity devoid of any understanding of urban planning – indeed, one area is simply called the “lawless zone” – or of how anybody would feasibly live in such a city, which is something the manga, television series and anime movies went out of their way to portray. The 2017 live-action movie attempts to shoehorn in no fewer than four set-pieces from the 1995 movie, but without the context or competence displayed in the original. To make matters worse, Mamoru Oshii’s trademark basset hound was also copied wholesale, a stunning theft of artistic watermarking that makes me openly wonder whether the current producers actually understood what they were doing.

In the aped scenes, the point is missed on every level: in the 1995 movie, the Major finds herself hopelessly outgunned by a walking tank, but her motivation for being there – it’s guarding her target – and her actions in combating it – taking advantage of positioning to attack the vehicle it’s guarding, targeting weak points and forcing it to waste its ammunition, attempting to open its hatch to unhook its human pilot – denote a motivation as well as a strategy to actually overcome the problem. In the 2017 movie, the tank remains, as does the Major’s target, but the Major is already in possession of the target and thus has no reason not to retreat, and she has no strategy to disable the tank except firing wildly with a small-calibre weapon and pointlessly fiddling with the hatch of a machine that is remotely controlled. The action remains; the thought behind it has been wholly excised and replaced with blind rage.

Similarly, her boss Aramaki in the manga and anime is a shrewd, responsible political strategist who does not put himself or his team in danger without a full plan in effect. His goals and the Major’s occasionally conflict, but they are understandable from his point of view, and the means by which he works towards them are intelligent. In the live-action movie he passively allows all comers to dictate the parameters of his department, to the point where he has no control over his subordinates and even allows a direct assassination attempt on his own person for no gain, just so a scene can be shown where he heroically saves himself. It is, I suppose, a testament to an American producer’s translation of this Japanese work that, like our current political climate, all characters are fundamentally incapable of acting competently and are thus forced, for the vast majority of the film, merely to react.

But beyond the base flaws in filmmaking, which the producers attempted to hide behind a gloss of technical tinsel, and beyond the complete misunderstanding of the setting, the format and the characterization, the greatest sin this movie manages is that, as a science fiction film, it asks no questions and offers no ideas. Its characters aren’t just dumb by comparison, they are dumb, and seem content to remain that way forever. I watched this movie in IMAX 3D in a nearly empty theatre in Kip’s Bay, and had my eyes closed for a third of the film, for I could no longer bear to witness what was before me. It hurt me to watch this film, and I feel ashamed that I chose to do so, for I knew better, and now I have nobody but myself to blame for how I feel right now.

Unlike in a number of other countries, our news media is entirely composed of private for-profit enterprises, which is why, historically, the city with the most newspapers – New York – is the one that invented what we call “yellow journalism” in the name of business competition and was a strong example of “tabloid journalism”: fact-neutral sensationalism crafted specifically to entice readers, not necessarily impart information, so as to maximize newspaper sales, subscriptions and ad revenue. The name of the game was profit margins, as evinced in the very terms themselves: ‘yellow’ because the cheap paper the news was printed on was yellow, and ‘tabloid’ because the newspapers themselves were smaller with condensed print; both were cost-saving adjustments incidental to the pejorative definitions they picked up.

Among the enterprises on this front – which the New York market exemplified but other markets also followed – competition required and requires slavish adherence to two principles:

a) The need to scoop stories the fastest, which puts pressure on fact-checking.

b) Embellishment and hyperbole just a hair’s breadth short of the legal definition of libel.

There is a third principle, not strictly necessary but often helpful: that of taking an overt political stance, whereby a paper can carve out a market niche by catering to a constituency that no other paper serves. This is not to say that such a political stance is necessarily ideological on the part of the paper’s publisher – quite the opposite; it is oftentimes a business decision, a mercenary undertaking that can be and has been shifted as markets themselves have shifted – but it also has a bearing on how the news can be colored, if not compromised.

While journalistic standards have since been codified – after all, the publisher Joseph Pulitzer, who owned the New York World, a purveyor of sensationalist pablum, also established an award for integrity in reporting – if not universally enforced, the profit motive has never gone away, and we see it in varying degrees in just about every paper still in print, which means journalistic integrity has always taken, and will always take, second priority to financial profit.

By comparison, the market for national television news was somewhat less competitive, being more of a cabal among the Big Three – NBC, CBS and ABC and their local affiliates – but it was Ted Turner in Atlanta who revolutionized the market and the manner in which television news was shown through the creation of CNN, whose innovation was the 24-hour news cycle. That cycle, unlike morning and evening papers or the evening television news, didn’t change reporting – because fact-finding can only happen so fast – but it did change how the information was disseminated. Emphasis was given to two factors, quite similar to the original principles, and indeed similarly non-conducive to journalistic standards:

a) The excruciatingly short deadline to be the first to report on a piece of news.

b) The need to fill all 24 hours with stuff that will glue people to seats.

The former has obvious effects on fact-checking – there is no incentive at all to fact-check, as it doesn’t matter how wrong a story is if it is incredibly popular and thus promotes ad revenue; it can always be “corrected” later on – but the latter only magnified the need for sensationalism. The network created shows like Crossfire and The Situation Room, in which any and all issues are depicted as “controversial,” with two opposing viewpoints and equal treatment of the pundits on each side of the issues discussed. This can be gamed, which is exactly what CNN’s progeny and main competitors, Fox News and MSNBC, did, which brings us to the second step.

Step 2: The Stupid

In cases where natural controversy cannot adequately fill the time – because there is already an expert consensus for one stance that the opposition cannot answer – the controversy must then be manufactured. The easiest and cheapest solution is to undermine expert opinion by simply giving airtime to opposing arguments, no matter how banal or insipid, and thus “even the playing field” by presenting conclusive scientific, sociological, legal or political analysis as unproven, if only for the sake of continuing the debate and thus granting a reason to keep watching.

This is lucrative so long as the opposing view has a market; i.e., an audience. They will tune in to see their worldview defended, as political stances can indeed be sold – though in this case the media enterprise attempts to butter its bread on both sides by presenting both sides.

This of course has the adverse effect of undermining facts themselves, as by definition this format cannot end a debate with a clear victor, for that would cause one half of the audience to stop watching (and, arguably, the other half as well, for once the controversy is concluded there is no ‘news’ to watch). Indeed, nothing can end the debate, because the debate itself is profitable for the private media organization: the more extreme the stance, the more emotional the response, and the more likely people will watch. Scholarship is debased by design.

Step 3: The Evil

With such a system in place, it becomes patently easy for interested parties and propagandists to game media sources that are amenable and suppress the few that attempt to resist. The best way to defend a lie is to attack the very idea of truth, which is child’s play in the format by which Americans receive their news.

Need an expert? Pay somebody to pose as one. Fox News has so many discredited “experts” that an entire cottage industry – the late-night contemporaries of Jon Stewart – has risen to quantify and criticize them, but that industry has had absolutely no effect on Fox News’ popularity or viewership: it merely profits off of the opposing view, for the simple reason that the debate is never concluded. If no expert is willing to lie on television, launder the source material by reporting on the reporting of bloggers and lumpenpundits: effectively, wallow in rumor and hearsay.

Need to muddle an issue? Run counter-articles and claim that the opposition is lying and/or compromised. Because the industry runs on confirmation bias, people will accept what is effectively an auto-immune disease for investigative journalism, because it bolsters their preconceptions. Breitbart and the Drudge Report have taken extreme stances that even the New York Post and the Washington Times have failed to venture, knowing full well that their readership will never abandon them, to the point where they will regurgitate articles from RT – the modern Pravda – derived almost entirely from anecdotes, misrepresented statistics or straight lies. Alternatively, simply out-shout the competition: internet memes, as evinced by the racist Pepe the Frog character, have been weaponized and can be produced and disseminated faster than anything before them.

The danger of this situation is that its solution is not fact-based, high-quality reporting, because propaganda is by its very nature quicker on the draw, cheaper, and thus far more prolific than the effort and expense quality requires. It drags truth down to equal footing with lies and then outproduces the competition. It is still, at heart, a business venture. This is also why counter-propaganda fails to work: liberal venues such as Buzzfeed, Vox and the Huffington Post have established their business models on this phenomenon, but they are not nearly as large, rich or numerous as those on the right. They simply can’t compete for volume, though they have proven that even self-described free-thinking liberals can fall victim to confirmation bias, as in their zeal they also play fast and loose with fact-checking.

In such a manner these enterprises not only profit off markets all too willing to hear what they want to hear, but they also maintain and cultivate those markets, creating a self-supporting propaganda machine that puts to shame – indeed, absolutely dwarfs – our facile, blundering and comparatively cute attempts of the last century, in the Second World War and the Cold War.

Big Smoke has moved from bigsmokestreetcorner.com to bigsmoke.nyc, following the decade-long wrangle the city had over getting its own top-level domain. New York City now shares this distinction with other nominal city-states (Hong Kong, Singapore) as well as cities that apparently want to be associated with being tech savvy (London, Paris, Berlin), but as far as I’m aware it’s the only city whose domain extension is exactly three letters. It is also the only city to limit its domain to locals, so there’s a fair bit of impish glee in being able to snag one.

The old URL will redirect to the new one, and kinks will eventually be worked out. Eventually.

I found myself, yesterday, in a place that I, like any self-respecting New Yorker, tend to avoid like the plague: Times Square. I was there on a mission to capture the proceedings of an organization that rented an hour’s time on one of the giant glaring billboards in order to display something that wasn’t bright, garish, empty advertising. They were called See|Me, and they were going to display art.

This created a curious scene as New Yorkers came to loiter amidst the Disney characters, street performers, cops and ever-present hordes of camera-clutching tourists. This eclectic band also held cameras, but was composed mostly of artists, there to see their works displayed to the world – or a reasonable (or reasonably American) facsimile of such. Each would get their five seconds of fame, provided the dazzled tourists cared to look.

Comically enough, it was the gathering of mean-mugging locals with their studied aloof mannerisms that attracted the attention of the tourists more than the works themselves. A tourist would approach someone with a camera pointed directly at the building-sized display and ask what they were doing. Taking pictures of the art. Oh, the tourist would reply, and walk on.

Prior to the event, a middle-aged woman in a loose-fitting white tank top came up to me and said, “You look like an artist. Are you here for the exhibit?” I was, silently wondering whether my studied aloofness was too studied, but she soldiered on and told me that one of her works had been approved for the exhibit, then censored at the last second: an oil painting she had made of a woman in a see-through blouse.

I remarked that I found that funny, as the panel on which the art was to be displayed was currently busy presenting ten-story-tall underwear models with obvious cameltoes doing acrobatic poses and looking longingly at the milling crowds below. Just a few blocks up was a hundred-foot pop singer whose latest album was being sold by her nudity, her arm draped across her chest, leaning against a headboard while lounging on satin sheets. Next to that was a lusty gaze from an airbrushed bimbo’s face promoting an ever-euphemistic gentlemen’s club.

Even in Disneyfied, family-friendly Times Square, home to life-size Elmo and Buzz Lightyear, clearly sex, or at least the suggestion of it, is broadly accepted.

My newfound compatriot had, despite her rejection, decided to show up anyway. As she described through her ill-disguised bitterness, she had to see just what was deemed acceptable. During the proceedings, she was not disappointed: quite a lot of skin was indeed bared, so long as the picture was cropped cleverly, or the model was twisted away from the camera, or some other suggestive trickery was employed. We as a society appear to have been desensitized to the female form, and inured to female sexual suggestion, yet we still treat its display as illicit in practice. We are a strange bunch.

One artist recently decided to hold a mirror to that particular neurosis by turning the tables on the subjects. Photographer Bek Anderson filled Rivington Design House on the Lower East Side with prints of nude male models two days ago. Not sexual, but very nude. It immediately drew ire from local prudes: “I guess the new people in the neighborhood are unaware of how many children live here.” Setting aside how tame this is compared to recent iterations of the Lower East Side, Anderson retorted, “There is nothing pornographic or offensive happening in that photo. It’s a portrait of a man. He is naked, but doing nothing indecent. We see naked women all the time in photos where they are highly sexualized and people don’t notice because they are desensitized.”

Indeed, now having been blasted by bouncing bosoms selling vacation destinations, jeans, music, airlines, soft drinks and candy – and that’s just one building – with little objection from the people below, I concede she may have a point. We have become accustomed to hypersexualized fantasy objects, but are inexplicably shocked by frank portrayal of real sexuality.

This barrier, among others, would not be broken down by the Times Square art exhibit, but then it would be asking too much for one hour’s worth of images to break down the perpetual onslaught of consumeristic vacuity before the masses, even if only symbolically. Indeed, five seconds for each particular piece of art was not enough to reflect upon it, and the artists down below were mostly (or merely) waiting for their piece to come up so that they might photograph it. Rather than stand against form, they became that form, their works made hollow, their messages muddled. Yet more grist for the mill of color and spectacle, no time for meaning or reflection.

Perhaps, then, it was for the best that the hapless woman’s piece be censored: At a stint of only five seconds, it would either be ignored or distilled into a flash of titillation, a conspicuous exercise in futility before an audience trained to react in only the most limited, pre-ordained ways. It probably works better as a story of controversy. Yet one more reason to avoid Times Square with a passion.

Matt Ashby and Brendan Carroll have taken aim at today’s Millennial counter-culture for what they feel is its “lazy cynicism” and “recursive irony”: co-opted by corporate forces and wallowing in its own ennui, today’s disaffected youth, they argue, are directionless, mere driftwood in the wake of their artistic betters in the postmodern world. Irony is fucking up culture. It’s true: we certainly rely a lot on snark and satire, from the interminable pages of the Onion to the comforting glow of the Daily Show with Jon Stewart. When, they posit, will we snap out of it and start producing something substantively, honestly real instead of just cracking wise?

These men lack perspective. They quote David Foster Wallace’s and Thomas Pynchon’s prophecies of cultural vapidity and sneer at Tao Lin’s hipster self-critique Shoplifting From American Apparel with “New Tao Lins publish every day, feeding the culture’s desire to watch its own destruction,” but their criticism of the Millennials’ directionless languor bears a strong resemblance to that which the Boomers heaped on Generation X’s punks. Ashby and Carroll laud the inevitable counter-counter-culture, in the form of ‘earnest’ postmodern art, but that path has been walked before: though it came from the UK, Trainspotting is a good example of a stark reaction to presumed punk counter-culture malaise. Likewise, how else could William Wimsatt’s Bomb the Suburbs have been written, if not to highlight suburban ‘wiggers’ and the tragedy of those youth? But these works, like Tao Lin’s, could not exist in any earnest way without acknowledging exactly why the aimless disaffection exists in the first place and why the first impulse is to deflect and mock.

Or, perhaps they could consider the Silent Generation’s criticism of the Boomers’ hippies, with Bob Dylan’s ironic co-option of folk music inflection as an explicit means to be seen as more authentic, much as a lot of today’s indie bands seek ‘amateur’-sounding recording sessions and emphasize acoustic instruments. Or we could go back to the iconic Rebel Without a Cause and discuss the inherent shortsightedness of what contemporary sociologists called the wave of Angry Young Men. Consider Kerouac’s Beat epic On the Road, which Truman Capote flippantly panned with “that’s not writing, that’s typing,” and the subsequent backbiting amongst critics over who was the bigger poseur, or the wise-cracking yet futureless delinquents Sondheim lovingly lampooned in West Side Story.

This is to say, it’s a generational thing, and today’s self-consciously ironic Millennials are no different in how they have chosen to deal with the world. Tao Lin’s apathetic pallor may differ stylistically from Chuck Palahniuk’s or Trent Reznor’s simmering rage, but it’s all equally masturbatory – or rather, it’s all equally a coming-of-age thrashing about to come to terms with what is, at heart, a fucked-up culture to begin with. That’s why counter-culture exists, and the art simply reflects that. To demand that artists deal with it differently is a foolish request, for it amounts to asking them to pave snark over with smarm; a culture so obsessed with authenticity ought to know better. Indeed, that is Ashby’s and Carroll’s central premise:

“Dishonesty is the biggest obstacle to making original, great art. Dishonesty undermines a work’s internal integrity — the only standard by which a work can succeed… Irony alone has no principles and no inherent purpose beyond mockery and destruction. The best examples of irony artfully expose lies, yet irony in itself has no aspiration to honesty, or anything else for that matter.”

What, then, does that make Kurt Vonnegut or Joseph Heller? How is Jonathan Lethem ‘worse’? American culture has a long tradition of sarcastic, sardonic, detached self-reflection. What was Hunter S. Thompson pointing out if not the fact that that earnestness was also by nature self-destructive? We have muddled on, are muddling on, and will continue to muddle on. Today it’s hipster irony, which, as a means for a generation stuck in the Second Gilded Age and about to double-dip back into the Great Recession to vent its spleen, is a far cry better than the bullets and bombs it could very well pick up instead.

The tech world is one of the last true growth industries in the United States that requires skilled workers, and having been part of it for a time, I became concerned about its ability to elevate general wages and in some fashion restore America’s middle class. In my eye, this meant union representation and collective bargaining: something computer technicians in general notoriously lack.

Among the strongest opponents of any form of collective bargaining at my last job, however, were the Quality Assurance technicians. They argued that any industry that unionized, or threatened to unionize, would soon find itself outsourced for lack of competitive ability, and that grousing about low pay and insufficient benefits was a straight ticket to mass unemployment.

At the time – before I got fired for grousing about low pay and insufficient benefits – I found it odd that the QA men, largely first-generation Gujarati immigrants tasked with actually testing and debugging the software, would say this, for they weren’t in a good position themselves. Like everybody else, their job security was largely nonexistent, their wages required a second job or a second earner to make ends meet and, at this company at least, there was no promotion path beyond that job.

I posed this question to them, and while they conceded that the situation may not be ideal, they retorted that I need but look at the textile and auto manufacturing industries to see what happens when unionization takes hold. Indeed, the textile industry moved to Mexico and Honda is showing up Ford at every opportunity. One could not argue with the facts.

But where does that leave the country?

I could understand where these men were coming from, but I think a different model is a more accurate gauge of outsourcing efforts. Walmart and McDonald’s are the two largest private employers in the United States, followed closely by companies like UPS, Target, Yum! Brands (who run KFC, Taco Bell and a host of also-rans in the fast food industry), Kroger and Home Depot. There are indeed some tech companies in the top ten – IBM, GE and HP – but some of the biggest names in the industry, like Dell, Apple or Google, don’t even make the top fifty employers in the country.

The fact that there was no promotion path at our job should have been a clue. Why were the QA men the top technicians in-house? Because the actual programmers were fired several years ago when they asked for higher wages, and were replaced by contractors in Russia. Updates to our proprietary software required late-night phone calls overseas and exorbitant shipping rates (and delays) for subcontracted point-of-sale machines (with parts from Taiwan, assembled in a right-to-work state).

The QA men were on staff largely to provide some sense of cohesion and inform the customer technical support – by far the largest department – of changes. The technical support were on site mainly because our clients didn’t like hearing accents on the phone, but tech support is the bottom rung of the ladder. The QA men couldn’t become programmers because the programmers were contracted out. The technical support couldn’t become QA because just about everybody was fantastically overqualified for their jobs and there was no space up the chain.

When the nouveau riche of the tech boom talk of opportunity, they’re thinking of the incredible difficulty of hiring top software engineers, to the point that they’re reliant on H-1B visas to fill the talent gap. There are too many people with English degrees, they say; not a large enough pool to draw from. In my office, however, there was an endless sea of graduates from computer science programs – not to mention former stockboys and shift managers from supermarket chains and distribution hubs who were working on their second bachelor’s – with an ever-lengthening set of credentials, yet they simply couldn’t get the work experience necessary to progress in their careers.

Branch Rickey, former manager of the St. Louis Cardinals and executive of the Brooklyn Dodgers, knew the solution nearly a century ago. When asked why he had such good players on his teams, he replied that “luck is the residue of design.” In his case, that design was the minor league farm system, which created a local pool of talented baseball players to choose from. Instead of having a limited supply of good players poached by the richest of teams, he in effect created his own.

The reason it’s so difficult to get that sort of talent into the top positions is that the middle positions have been exported and as such largely don’t exist for up-and-coming workers – unless, of course, they are independently wealthy enough to intern for peanuts or strike out in business on their own. It should be that a worker can grow with the company and make himself indispensable by training and gaining a greater understanding of the product in-house. That’s simply not the case now. As my former coworkers and I search for our own opportunities, we have indeed observed a gap in availability between bottom-tier jobs and top-tier jobs: you can either be a cog or a consultant; there is no in-between.

My coworkers made the mistake of interpreting this problem as one prompted by unionization, but it is one of short-sighted corporate policy: We’ve gotten to the point where starting technicians get paid the same as stockboys, and while there is clearly an incentive among my cohort to get past that point, until there is a viable stepping stone to something better the increased competition at the bottom rung is only pulling us all down.
