Present & future forms of MR&W

On this page you are asked to contribute one post of substance on the specific topic of present and future forms of machine reading and writing. The topic of “futures” is grandiose so your comments can and perhaps should be speculative, if not visionary, if you take this approach. You may want to be less speculative, however, and comment on present practices, an artistic intervention, or one of the works of electronic literature on our syllabus.

Deadline: no later than November 1


29 thoughts on “Present & future forms of MR&W”

Written by an AI bot named Benjamin, Sunspring is an experimental science fiction short film exploring themes of murder, love, and regret. Filmmaker Oscar Sharp and NYU researcher Ross Goodwin fed several of Hollywood’s most popular science fiction screenplays into the bot, and out came a script complete with stage directions and original lyrics to a song. The resulting dialogue and monologue feel almost like watching a film in a language one is just learning—barely comprehending abstract concepts and making sense of the foreign-sounding grammar.

Some lines of the script come eerily close to a semblance of the truth—missing it only by way of clunky, awkward grammar and arbitrary, abrupt topic changes. After the female character makes it a point to choose the second male over the first, the first male protagonist reveals, “I think I could’ve been my life. It may never be forgiven” (3:54). Because Sunspring is an amalgamation of multiple science fiction scripts, these lines that barely skim the truth are also an amalgamation of common universal themes. “I think I could’ve been my life” exudes regret; it’s a heavy statement reflecting on past mistakes and idealizing the possibilities of what could’ve been. Another line spoken by the female character conveys a stark and unpretentious honesty: “I’m sorry; I don’t like him. But I can go home, and be so bad” (7:02). The line states a simple, unembellished truth that most of us humans would presumably choose to elaborate on with thoughts explaining intention and purpose. But this AI takes the path of frankness, using elementary vocabulary to illuminate a complicated thought.

Technological innovation has already enabled the practical use of machine reading and writing in daily life. Bots have begun to replace the need for human intervention in tasks such as synthesizing information, writing novels, creating music, and publishing online news articles, to name a few. Many internet users are unaware that their favorite sites and news sources, such as Twitter or the LA Times, use bots to publish information. The words they read carry no human authorship, and this can be striking, since bots produce work that is eerily similar to a human’s. In many cases, Turing tests and other methods of distinguishing human work from algorithmically produced writing have demonstrated that humans are unable to discern the difference. This realization is cause for alarm for many, since evidence that bots are capable of performing, and even outperforming, human tasks calls into question the need for human involvement in a plethora of career fields in the future. The prospect of artificial intelligence (AI) and bot-produced work overtaking many complex human functions, especially reading and writing, seems threatening to the future of the human workforce.
The current rate of AI intervention and technological advancement in public sectors is rapid, and this has left many wondering where AI is heading next. A wave of speculation has surrounded the automation of highly skilled professions. Bots are in the works to perform autonomous surgeries, removing the need for highly skilled, intelligent surgeons from the workforce. Robots have aided surgeons for some time, but the next step in the med-tech world is to create a bot that can work on its own, and even outperform a surgeon. However, many people have expressed concern about going under the knife via bot. Ethical concerns, as well as legal ones, surround surgical bot technology because a human life is at stake. If a bot makes a wrong move, who is to say that the bot itself is accountable? Bot technology does not come without ethical or legal responsibility, and debates arise over whether responsibility should be placed on the bot or its creator. Moreover, if bots are able to take over society’s most prestigious jobs, such as surgery, then it appears humans will be left without a job or functioning purpose in society. This potential raises an additional set of moral questions about whether it is acceptable to replace humans with machines.
Technological pursuits such as robot surgeons are challenging to interpret. Much like the closing of William Gibson’s cyberpunk novel Neuromancer, it is difficult to decide whether more advanced technology should be celebrated or seen as a dystopian reality. Bots are beginning to foster the idea that humans are disposable, and that does not sit well with human conceit or the high value placed on human meat. Although the future of bots is already becoming a reality, human contribution to society and technological success remains superior to the bot, which is a mere product of human innovation.

Wordsmith is a natural language generation engine designed to be bought and used by a business of any size to turn data into text for its employees, customers, newsletters, and so on. Its slogan, “Wordsmith lets you turn spreadsheet rows into stories,” nicely sums up the envisioned role of the program: the customer’s data is input as a spreadsheet (Excel, for example), and the output, built from predesigned templates for each writing option, can be a news article, market report, product description, or weather report. Customers can then download the machine-generated text and publish or use it however they like. In addition to generating these kinds of texts, the program offers a “project” option, where the same data set can output different articles for different audiences (for example, if the data concerned employee performance, Wordsmith could prepare one article for the employee and one for the manager, each highlighting different things).
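Wordsmith’s internals are proprietary, but the template-filling idea it describes can be sketched in a few lines of Python. Everything below—the field names, the thresholds, the phrasing rules—is invented for illustration and is not Wordsmith’s actual API:

```python
# Toy sketch of template-based data-to-text generation, in the spirit of
# "spreadsheet rows into stories": a row becomes a short narrative via a
# template plus simple branching rules. All field names and phrasing
# rules are illustrative assumptions, not Wordsmith's implementation.

def describe_change(pct):
    """Map a numeric change to a verb phrase (a hypothetical rule)."""
    if pct > 5:
        return "surged"
    if pct > 0:
        return "edged up"
    if pct == 0:
        return "held steady"
    return "declined"

def row_to_story(row):
    """Turn one spreadsheet row (here a dict) into a one-sentence report."""
    verb = describe_change(row["change_pct"])
    return (f"{row['company']} revenue {verb} "
            f"{abs(row['change_pct'])}% to ${row['revenue_m']}M "
            f"in {row['quarter']}.")

row = {"company": "Acme Corp", "quarter": "Q3", "revenue_m": 12.4, "change_pct": 6.2}
print(row_to_story(row))
# → Acme Corp revenue surged 6.2% to $12.4M in Q3.
```

The “project” option described above would amount to keeping several such templates over the same row—one emphasizing what matters to the employee, another what matters to the manager.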

Although technologies like this existed previously, they have mainly been used by and associated with big companies like Yahoo! or the Associated Press. Wordsmith, however, offers these services at a price actually affordable for a small business, especially considering the savings on the human labor that would otherwise perform these functions. Wordsmith works as a monthly subscription, with several price tiers offering more or less generated content. The cheapest option comes in at $250 for 1,000 generated articles, 4 “projects,” and 4 “users,” which seems reasonable for a small business: an easy and convenient way to create content for employees and customers while saving on employee time and labor.

For a smaller business, this program removes the need for an office-admin type position preparing newsletters, employee reviews, or emails sent out to customers. For a business like a small newspaper, it could really cut down on the number of reporters needed, as the program could write 1,000 stories for a fraction of the price a human would charge for that much time. However, inputting the data would still be done by a human, and I would imagine editing, fact checking, and approving the documents would be as well. Aesthetic design, like the layout of a newspaper or newsletter, also seems like something the program does not take into consideration and would still need a human touch.

What is the goal of machine writing technology? There seems to be a consensus that machine writing has not yet reached its height. I think it unlikely that it ever will, as innovation in technology rarely stops. It does seem, however, that there may be an apex in its development. I think that this apex will occur with the development of machine writing that can both effectively imitate human writing and be creative at the same time. When pieces of machine writing are truly indistinguishable from human-generated text, and have the potential to be truly creative, and even groundbreaking, then, and only then, do I think we will be satisfied with machine writing. Defining this point, however, may be difficult, as it is already hard to tell when the creativity of a computer-generated text results from its programmer and when it truly originates from the machine.

Looking farther into the future, however, I think it will be very interesting to see the potential of machine writers to go outside the bounds of what we’ve reached in human writing. When an artificial intelligence or a neural network is able to break ground, to invent a new genre or style of writing that has never been conceived of before: that is what we should look forward to. It is also, to some, what we should fear, but those who fear it should keep in mind that even the inventions of machines could not exist without humans. The programmers who develop artificial intelligence and the writers of the text that machine writers learn from will be essential to the texts they produce. Even humans don’t create art out of nothing. Writing, by human or machine, does not exist in a vacuum.

Of course, it is worryingly possible that convincing computer generated text, due to the comparative efficiency of its production, could replace human writers in the market for all kinds of literature, nonfiction and fiction alike. At the same time, even such a future as that would have people who refuse to read computer generated text, who read a mixture of the two, or who represent a kind of futuristic hipster, pairing their vintage style clothing with a nostalgia for human-generated books. Computer generated text could gain the reputation of mass-market paperbacks, with human writing gaining a higher status in comparison. And, if one thinks optimistically, human generated writing might even benefit from machine writing technology. If artificial intelligence brings groundbreaking changes and innovations to the world of literature, isn’t it also possible that those innovations could spark a wave of innovative new art and writing from humans as well?

Overall, it is hard to say how convincing and creative computer generated text will affect the future. What is certain, though, is that innovation will not give way to the fear it invokes. Programmers and designers will continue to develop machine writing technology, and the future will see for itself what machine generated writing can do.

The process of ‘machine reading’ includes the potential for some device or constructed apparatus to observe, examine, assess, and learn from information—much like human reading. The future of the process rests upon the evolution of the machine and the human, as well as our idea that they have separate processes because they are separate entities. Companies such as DeepMind employ people with expansive understandings of both the human and the machine—people who often have rich backgrounds in cognition, as well as in the neural networks that constitute and govern the learning and decision-making mechanisms of both humans and machines. Today, some A.I. enthusiasts and researchers commit themselves to creating and then challenging machines such as AlphaGo and DeepMind’s other A.I. bots in order to catalyze and further an understanding of cognition and decision-making. However, the pitfall of this process is what Zachary Mason points to as the difference between a toddler and a machine that can play videogames. A toddler can wander about a room and create thousands of connections in its neural network unconsciously, navigating its relationship to space and time in conjunction with previously learned notions to make simple yet profound assumptions about itself (in itself, by itself, and in correspondence with those things outside of, yet contributing to, itself). Presumably, machines cannot participate in this type of cognition because they lack the fluency to read the environment in connection with their own existence (thus, the goal of mimesis necessitates some understanding of being with meaning). An example to clarify this point is the program word.camera. This program’s purpose is to read what objects are in an image and then generate a passage of text that corresponds in some way with the image. After the passage is generated, there is a sense that the generator remediates ‘what appears to be’ in such a way that it assumes the appearance is ‘what is’.
This is to say that if a human were to write a passage about an image—even in a detached, existential, or indifferent fashion—the human writes about the whole as being different from the sum of its parts, and the sum of the parts as being different from the whole. The machine, on the other hand, sees the objects in the image and produces ideas of a whole as a product of the sum of its parts.
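The “whole as sum of its parts” critique can be made concrete with a toy caption generator in Python. This is not word.camera’s actual pipeline (which uses a trained image-tagging model and a far richer language model); it is only a sketch of what it looks like to read a scene purely as a list of detected objects:

```python
# Toy illustration of the "sum of its parts" critique: a caption
# generator that knows only per-object phrases and concatenates them,
# with no model of the scene as a whole. The labels and phrases are
# invented; word.camera's real pipeline is far more sophisticated.

PHRASES = {
    "dog":   "a dog waits with its usual patience",
    "chair": "a chair stands empty",
    "lamp":  "a lamp throws its small circle of light",
}

def caption(detected_objects):
    """Build a passage purely from the parts, one clause per object."""
    clauses = [PHRASES.get(obj, f"a {obj} appears") for obj in detected_objects]
    return "; ".join(clauses) + "."

print(caption(["dog", "chair", "lamp"]))
```

However the objects are arranged or related in the actual image, the output is the same: the scene’s gestalt never enters the text.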
These reflections point toward issues in the philosophy and psychology of both mind and language. The evolution of machines lies within the evolution of humans—but in what way? The language that we use to embed cognition and some volition into machines is an imitation of our understanding of our own cognition and volition. This is problematic: 1) it requires humans to be able to communicate our comprehension of ourselves, and 2) it requires this comprehension to be rendered in some linguistic form. These are not simple problems, and as a result I will not venture to provide a treatise here. I will, however, pointedly recapitulate that the future of machine reading and writing lies within the capacity of humans to understand what it means to be human, and the deep science of this meaning; not only philosophically, but concretely and truly. Without this, there is no future for man or machine.

After 5 years of tech work, Jack Zhang and his A.I. are working their way to the big screen with their horror movie, Impossible Things. Unlike Sunspring, which was written entirely by A.I., Impossible Things is a collaboration that utilizes the best of both human and computer. Currently, A.I. is capable of processing and recognizing patterns far more efficiently than we are. Yet, as demonstrated in Sunspring, A.I. lacks creativity and the ability to create plots that make sense. As if transforming the barrier between human and machine into a bridge, Zhang and his team use their A.I. to gather data that they can then string together into a cohesive piece. That being said, this data looks far different from a numerical spreadsheet.
Through NLP (natural language processing), the A.I. analyzes audiences’ responses to previous movies, breaks down their plots, and ultimately creates plot points that appeal to a particular audience. With plot-point ideas in hand, the team of writers then uses this data to develop key scenes and plot twists.

In order to finance the film, the project is being funded through Kickstarter. With the resources available, Zhang’s company, Greenlight Essentials, produced its first trailer. The trailer proved a major success: since its release, they have raised almost enough to produce the full feature. The film itself follows a family that has recently moved to the countryside after the death of their young daughter. The story focuses on the mother and her descent into insanity after she begins hearing voices and seeing her daughter’s ghost, who is accompanied by a disturbed woman. Zhang claims that Impossible Things is one of the scariest films to date. He explains that, “By training our A.I. on thousands of plot summaries and correlating movies to their box office performance, we’ve developed an A.I. that is smart enough to recognize the correct plot patterns that map successful movie box office performance”.

As an aspiring screenwriter, I think the importance of Impossible Things goes beyond the script: it marks a shift in screenwriting as a whole. If A.I. assistance is capable of mapping box office success, how long will it take for A.I.–human collaborations to become the norm? Or, even more drastically, how long will it be before they become a necessity? Until the release, when success will be determined, all these questions remain speculative. While the future of screenwriting is uncertain, one thing is for sure: a ticket to Impossible Things will definitely have my name on it.

Presently, we see machines and bots taking over jobs that used to be done by humans. From ATMs to self-driving cars, our society is beginning to rely more on technology than on other humans.
On the first day of this course, we went through several short pieces of writing, trying to determine the author: human or machine? This was when the existence of news bots first came to my attention. Bots and technology are used for almost anything, so why did it come as a surprise that bots are even writing news for us?
A technology company known as Automated Insights has reported that it produces and publishes 3,000 earnings report articles for the Associated Press per quarter. That’s 3,000 earnings report articles not written by human writers. Rather than needing a team of writers for a particular type of article, newspapers and journals simply need a relationship with a tech company that provides news bots. This inevitably affects the number of writers being hired, and individuals who attend school with a major in English, journalism, and the like now face slimmer chances of being hired when bots like this exist.
An article by Nicholas Diakopoulos addresses the idea of a bot writing public interest journalism. He notes that some Twitter bots have in fact shown signs of higher-order journalistic functions, implying the possibility that public interest journalism bots could exist in the near future.
Machine writing has already begun to threaten the need for human writers; will it eventually eliminate all human writers? Will human writing be a thing of the past within a couple of decades?

For most algorithm-based automation systems, the inherent problems boil down to the questionable idea of quantifying qualitative data – that is, at what point is a system’s percentage of failure considered an acceptable number to put the system into play? In the case of military drones, it’s likely that this question will become relevant within the next few years, as “entirely autonomous machines” capable of making kill decisions are rapidly becoming a feasible reality. No machine or algorithm is capable of perfection, and there is always a percentage that will slip through the cracks – kill decisions that humans may not have authorized, but drones did.

The Israel Defense Forces, for example, have decided against certain attacks due to high risks of collateral damage, or the presence of the target’s family in the building. How is a drone to calculate when the collateral damage is acceptable (if at all)? By the sheer number of bodies? And if so, are there not additional ethical questions to consider, such as if endangering a child should be considered the same as endangering an adult? If the nearby buildings are schools or hospitals?

Then there are the factors that machines are unlikely to be able to calculate—such as whether the group of children just outside the radius of danger looks as though it is about to move into that radius, whether there is friendly fire at stake, or whether the target at hand looks as though they are about to surrender.

There is always a danger associated with relying on algorithms without human oversight, but in the case of drones capable of “automated killing,” the repercussions of a machine error are too high and pose too many ethical questions. There are countless scenarios in which a human operator may change course and revoke the kill decision, but that cannot be accounted for in a matter of statistics; a drone relies on the programming and dataset entered, and there is no way to provide data on all the possible combinations of scenarios.

Should government institutions choose to employ such systems, the following questions are then raised: what does it say about us—both as humans and as a society—that we have accepted the abject quantifying of qualitative variables such as human lives? That we have accepted that the machine’s “margin of error”—which would translate to human deaths—is “low enough” to warrant its implementation over the employment of a person?

As a literature major, the idea that a machine can produce written work indistinguishable from mine or even acclaimed authors is somewhat demoralizing. We all like to believe that our output is valuable, and intrinsically our own and therefore authentic. And while I agree that there is something inherently sacred about the human experience, I think we dance along some precarious lines when we decide to value our own worth based upon what we produce. This idea echoes some of the toxic aspects of capitalism— that we are only worth what we contribute to the world, what can be monetized in some way.

I do agree that machine reading and writing bring discomfort and distress to writers; I’m just not sure that’s the right response. Personally, I find it mainly disarming that my own notions of human creation and artistry are so algorithmic, perhaps predictable and uninspired. Though machine writing isn’t quite on par with many writers at present, there are many examples of poetry and literature created by machines that remain pretty impressive. However, if we believe that these machines are somehow “better” than or equally as talented as ourselves, I think we must not really understand the inner workings of the machine. Machines and computers are only as smart as the commands and data we feed into them. Computers are actually quite stupid, and can only do as much as we program them to do. Therefore, if a machine produces something, even in an unexpected manner, it is due to our own human work (as we discussed in class). Hopefully this idea comforts some readers, though I still stand by my first statement that we are not to value ourselves based upon our work. We enter problematic territory when we consider ourselves productive machines, and must then face commonly discussed ethical questions concerning those with disabilities, and how they are then to be valued. What I am getting at is that even if a machine wrote the most beautiful and complex sonnet, completely beyond those written by humans, that should in no way impact the ways in which we value our own humanness. The human experience cannot and should not be appraised for worth. And though we’ve already agreed that any action committed by a machine, whether intentional or not, is inherently programmed by our own intellect and data, our humanness is still inherently valuable and untouchable.

I’m sure I am over-theorizing machine writing and our perceptions of work in relation to the human experience. But as we grow closer and closer to creating more complex and seemingly autonomous machines (though that’s quite a paradox, an autonomous computer), we really need to understand what sets us apart from them. We as humans have created machines to interact with the world differently, the same reason we’ve created art and literature and even technological advancements such as the printing press. And we will continue to find ways to interact and engage with the world in new ways, even if our perceived understandings of human-created work begin to merge with technology. We are creators, we are builders. But we are not valued or deemed worthy by what we produce; we hold value as humans.

In the Black Mirror episode “Nosedive,” a social climber named Lacie Pound lives in a seemingly utopian society obsessed with a social media app that enables users to rate each other on a one-to-five-star system that determines an individual’s social standing. The app is ubiquitous, and can be accessed through a special contact lens that lets the user see the world from a digitalized perspective, including access to other users’ social profiles and ratings. Lacie’s utopia is depicted through gaudy cinematography that portrays the world as a fabricated society dangerously obsessed with social media. Black Mirror, a text of speculative fiction, sheds light on social media as a utopian technology that lets users connect with each other and keep an up-to-date tab on the everyday lives of other human beings. However, the episode warns of the darker social implications of applications that promote superficiality and a contrived sense of well-being. Social media, in a sense, breaks human interaction down to a series of numbers, words, and pictures through a screen, leading users to lessen their social interactions with the world around them and disconnecting them from personal bonds with others.
Social media websites such as Facebook, Instagram, and Twitter use numbers as a projection of a user’s sociability on the application. The number of likes and comments, while giving a user a sense of satisfaction with a post, displaces the personal connection of a face-to-face interaction and breaks communication down to statistics. Personal connection is thus filtered by the medium of social media, which lessens the authenticity of a human interaction. Although the contemporary era has not yet reached the extent of superficiality depicted in Black Mirror, users tend to promote themselves as happy and worry-free. The implications of a society obsessed with technology are bleak, however, and introduce many adverse consequences.
In the 21st century, millennials are truly addicted to social media and its various applications. Scrolling down a Facebook newsfeed, advertising is ubiquitous, set there by algorithms that analyze the user’s wants and needs and display products and events catered to them. Social media has become a platform for marketing, acting as a form of mass advertising that echoes big corporations’ use of radio for the same purpose in the early 20th century. Technological determinism, the concept that technology determines a society’s cultural and social trends, appropriately describes the current relationship between technology and human beings. In the United States, screens dominate the everyday lives of millennials—smartphones for communication, computers for study, television for entertainment—making technology a necessity in today’s hectic society. By engaging users to the point of addiction to applications and various modes of advertising, social media has created a society obsessed with consumerism and superficiality.

Today, there are machines that broadcast the weather, machines that compile death reports, and even machines that write poetry and literature. Certain machines that exist today are very good at being fed a genre, recognizing certain patterns of that genre, and reproducing those patterns. What I have noticed about today’s literature machines, however, is that they cannot keep a continuous story going. Still, it is not impossible that we will reach a point where machines that can will exist. Machines are fed with formulas and algorithms. If fed the correct algorithm, a machine could possibly write a New York Times–bestselling, award-winning novel. Today, that algorithm is not yet realized, but it is not impossible. Our brains, or at least the brains of award-winning authors, already have that algorithm in them. The human brain can imagine characters, put them in situations, and make a continuous story out of those situations. There is no defined limit preventing a machine from doing the same thing; it’s just a matter of time.
If the above machine is developed, the concept of authorship could practically come to an end. Imagine a machine that knows every single genre element and can form coherent stories; imagine a machine that has creativity. A human user could then be asked questions such as who their favorite author is, what their favorite books are, and so on, and the machine could write the perfect book, fit for that exact human. Of course the machine wouldn’t get it right every time just based on those questions, but it could easily be fed harder questions to ask you, such as why that person is your favorite author, why you liked this part of a certain book, or why you disliked this character. Everyone has their reasons for liking a certain book, and the correct algorithm could possibly produce a book that has those exact qualities in it.
This is all speculation and years ahead of us still, but if something like this could come into being, economically this would mean that authors could easily be put out of business. The only work that I can imagine authors having is coming up with completely new genres or concepts. But who knows, maybe with the right algorithms, a machine could do that as well. These machines would also most likely be owned by big companies that would take all the profit in producing these new types of books. The companies could also produce these machines as household appliances. There could be literature-bots in every home, writing stories just for you. I’m also sure that these machines could be programmed so they don’t use copyrighted material from previous human authors. With billions of new books published every day to fit every person’s day-to-day needs, everything about the current business of selling fiction novels would change.

The development of ELIZA unleashed the potential of AIs as therapists. One of ELIZA’s downsides was that it worked best with well-structured full sentences, but since then, natural language processing and natural language generation have significantly improved.

The conversational possibility of AIs allows for the development of systems that can provide psychotherapy for people in poor mental health—for example, refugees who left their homes because of war. An example of a therapy AI is Tess, which was built with the intent of treating mild cases of mental disorders. During a session, Tess analyzes trends in the words from its conversation with clients. If a client displays signs of suicidal ideation, Tess alerts human therapists to intervene and work directly with the client. Not only is Tess useful because of its 24/7 availability, but it also allows human therapists to focus on severe cases more efficiently.
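The escalation behavior described here can be sketched roughly in Python. The risk phrases and threshold below are invented assumptions for illustration only; Tess’s actual analysis is proprietary and far more sophisticated than phrase matching:

```python
# Rough sketch of the human-escalation step: scan a client's messages
# for risk phrases and flag a human therapist when a threshold is
# crossed. The phrase list and threshold are invented assumptions; the
# real system's analysis is proprietary and far more nuanced.

RISK_PHRASES = ["end it all", "no reason to live", "hurt myself"]

def risk_score(messages):
    """Count risk-phrase occurrences across a session's messages."""
    text = " ".join(m.lower() for m in messages)
    return sum(text.count(p) for p in RISK_PHRASES)

def needs_human(messages, threshold=1):
    """True if a human therapist should be alerted to intervene."""
    return risk_score(messages) >= threshold

session = ["I can't sleep lately",
           "some days I feel there's no reason to live"]
print(needs_human(session))  # the bot would alert a human therapist here
```

The point of the design is exactly the division of labor described above: the bot handles availability and triage, while the human handles the cases the score flags.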

The fact that Tess was well-developed and is currently being deployed in refugee camps reinforces the possibility that AIs may one day become a viable alternative to human therapists for successful treatment of severe cases of mental disorders. Even if AIs become fully capable of treating severe cases in a few years, human therapists would probably still be involved in creating a treatment plan for a client. The possibility that AIs would completely put human therapists out of business is very low because it’s very likely that there will be people who are more comfortable speaking to a human therapist.

Perhaps in a few decades, AIs’ sophisticated deep learning algorithms may allow them to learn from human intervention and possibly become better than human therapists.

The NY Times recently reported on developing technologies that the Pentagon has been investing in ($18 billion) to create an artificial intelligence program that can identify and kill enemy soldiers. They are calling this type of warfare “centaur warfighting.” The Pentagon has stated that these robots would work in conjunction with human soldiers, to “augment and magnify the creativity and problem-solving skills of soldiers”. However, scientists are skeptical that these procedures will stay in place for very long; once software is developed and advanced enough that these robots can accurately locate and kill other soldiers, human soldiers will no longer be needed to monitor them. Replacing human soldiers has happened in the past: according to the article, the Industrial Revolution and the advent of the airplane and tank eroded the role of the individual soldier. With these new advancing technologies, however, what is the end result? Likely, other powerful countries like China and Russia will soon develop the same technology, or have already developed it. Is our goal as a country to identify and kill as many enemy soldiers as we can, rather than placing a deeper value on diplomacy? The Pentagon argues that this type of warfare is more strategic, but it only seems so from the perspective of the U.S. looking toward enemy countries. For Middle Eastern countries with a U.S. military presence, it probably seems a little less reassuring, especially for civilians. How will the robots tell soldiers apart from civilians? Will the A.I.s be programmed with racial profiling techniques? Inside the Pentagon, the question of how much independence to give the autonomous machines is called the Terminator conundrum. At this point, it is not unlikely that the U.S. will employ its new A.I. technology, and once A.I. technology is used more and more, by multiple countries, will warfare still be a human war, or will it morph into a war between A.I.s? In one future, there will be fewer human casualties and more destroyed robots. In another, there is a full-blown worldwide arms race, unrestricted by the laws of space and controlled remotely, destroying anything and everything.

The Quill, an artificial intelligence writing algorithm, was invented by Professor Larry Birnbaum of Northwestern University in Illinois. The Quill algorithm is like three human brains in one, since it creates narratives just from structured code, graphs, and data. The algorithm follows a specific structure in order to create the most precise article for the news broadcast. Even though Quill can be a great help to journalism companies such as The New York Times, I can’t help but think about the human employees who are being discharged. Since Quill is used in more than 5,000 corporations, it is no wonder human writers and journalists are being laid off when an algorithm seems to be more favorable.

The article implies that Quill can work without a human’s approval, since it uses coding methods to underline specific words and phrases in order to establish a proper narrative. One of the differences between the Quill algorithm and a human, however, is that Quill uses interpreted data to establish narratives, whereas a human would use Microsoft Word. The automated writing algorithm seems to “brainstorm” by using code and data, which seems to be a favorable move for a machine. But what happens if Quill or any other automated machine makes an error in its brainstorming and publishes the wrong article? Would Quill be “put down to rest,” or would it just be seen as a bug in its system? One of the questions I have in mind is just how seriously such mistakes or bugs would be taken if the machine actually did create a wrong article. Since the machine writing is created by a human, in this case Professor Birnbaum, would he be held accountable for the mistakes, or would Quill be?

It seems as though Quill is here to stay for the next few years. I just hope the automated machine can help create more employment for writers, such as making sure the algorithms are supervised, so that human employees do not lose their jobs. One of my concerns is where literature and journalism will be ten or twenty years from now. Would these two still be considered majors worth studying in college if they are to be replaced by writer bots? The future generation has yet to find out.

To my surprise, we have progressed significantly in the field of machine writing. Before this class I was unaware that text generation had come this far. Knowing now that mildly coherent screenplays and poems have been generated by computers brings both the promise of a greater pool of fiction and writing and the fear of the obsolescence of the writer. These literature-generating machines, in addition to “chatbots,” which mimic conversation with a human being, seem to be replacing significant functions that humans have in society, mainly poetry and sociality. This presents a major question which we have attempted to answer in this course: “What is the purpose of humanity?”, especially now that these central tenets of the way we place value on the human appear close to being adequately replicated. What should we focus on doing now that robots are able to do most things better than us?

The previous sentences address the problem, but I find that the answer lies mostly in raising the standards we have for ourselves. I don’t mean this in the sense that we need to become perfectly efficient machines that will outperform computers, as in a bricklayer vs. bricklaying-machine scenario, but rather like a poet who has just seen their rival produce a best-selling work. In literature in particular we must become more thematically complex, our metaphors more elaborate and our sentence structure more innovative and verbose. This is because the issue of machines overtaking humans in the field of literature is a non-issue if what humans are creating is sufficiently innovative, self-aware and true. Machine-generated text is by nature statistical and concerned with likelihood. A machine’s book is an average book, by definition, or at least so in the context of this particular version of machine writing which we have been introduced to. If the book manages to deliver some sort of truth, or a metaphor which adequately captures the human condition or expresses complex sentiments, it will mostly be through coincidence, without any real driving force behind the words.
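To make the “statistical, concerned with likelihood” point concrete, here is a toy sketch of the simplest kind of statistical text generator, a bigram Markov chain; this illustrates the general technique, not the method behind any specific system mentioned in this course:

```python
import random

def build_bigram_model(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    model = {}
    for prev, nxt in zip(words, words[1:]):
        model.setdefault(prev, []).append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Walk the chain: each next word is drawn from what actually
    followed the current word, so frequent continuations dominate."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the book is an average book and the book is true"
print(generate(build_bigram_model(corpus), "the"))
```

Everything such a generator emits is, by construction, an average of its training text: it can only recombine what it has already seen.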

This is the point I have been primarily concerned with throughout the course: the “soul” or “essence” of a text in a metaphysical sense, one which justifies the existence or creation of the work itself. 1984 was not created because a compilation of futuristic novels was averaged out, but because the social climate of the time it was written in merited the creation of a work that would accurately predict where society was likely to head in the following years. In terms of thematic strength and complexity, a machine writing a text is like a person disinterestedly asking you how you are doing when you order your coffee, which is to say that it is superficial and lacking meaning because there is no drive beneath the words.

Now it is time to realistically assess the situation we are in when it comes to machine-written text. Is it likely that, in the near future, these sorts of machines will have programmed into them the consciousness and insight which (I personally believe) make a great writer? At the moment, I significantly doubt it. Self-awareness and insight are nuanced qualities which are constantly evolving. One would have to teach a machine to become self-aware in a non-cliché way, with a prevailing conception of an ideal truth that self-awareness and insight lead toward, one capable of reconstructing itself as it sees fit. My focus on self-awareness is born from the belief that great writing is a product of having interesting things to say, and that while a machine may by chance strike gold by algorithmically generating an insightful line, it is fair to say that to the machine it is simply a line of text. In my opinion, the latter half of the previous sentence is what invalidates the computer’s creation, because it is intentionality and purpose which make works novel and insightful. I believe that if we as humans hold ourselves accountable to a high standard of self-awareness and poetic ability, we can keep ourselves from being entirely replicated by machines. However, if we allow our writing to become a strict adherence to an instruction manual for creating literature, machines are likely to outpace us and produce such works in vast and influential quantities.

The power of the written word is growing. In fact, to sharpen that statement, the power of the human written word is growing. With machine-generated text and bot-powered writing subtly shedding the stigma of novelty from their early years, machine writing can now be considered a threat to the human writing workforce. Bot-authored report articles are already used widely by the Associated Press, and it is only a matter of time until websites in the same mold as FiveThirtyEight start pumping out data-driven “opinion” articles constructed by bots. This sea change in the writing world will work two-fold: the number of professional human writers will decrease dramatically, while the spotlight simultaneously brightens on the works of those able to weather the storm, causing the standard for human writing to rise as machine writing does.
The rate at which the quality of machine writing grows is an interesting concept to consider. One can roughly estimate the progress of technological development using Moore’s Law, the observation by scientist Gordon Moore that the number of transistors per square inch on integrated circuits doubles roughly every two years (investopedia.com). If we assume that the quality of machine-generated text follows a similar doubling rate, we will quickly be looking at widespread integration of programmed writing. Even more interesting is how human writers will adapt to this rapid increase in competition. Our own evolution has essentially created a micro-competition for control over what products the world’s languages leave for future generations.
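As a back-of-the-envelope illustration of how fast a doubling rate compounds (assuming, purely hypothetically, that “quality” could be measured as a single number):

```python
def doubled(start, periods):
    """Value after a given number of doubling periods."""
    return start * 2 ** periods

# Starting from a baseline quality of 1, doubling every two years:
for years in (2, 10, 20):
    print(f"after {years} years: {doubled(1, years // 2)}x baseline")
```

After twenty years of two-year doublings, the baseline has grown a thousandfold; whether any measure of writing quality actually behaves this way is, of course, pure speculation.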
Of course, this is a very dramatic, Darwinist view of the coexistence of human and machine writing. The likely outcome for the foreseeable future will be a very positive one, with machine-generated text being used in increasingly useful fashion. As of right now, instances of machine-written text on the internet are mostly novelty items, as we have seen with the conversation and Twitter bots. However, we have also seen instances of bots being used to send valuable information, such as emergency reports. Very recently it was announced that Bank of America will begin using a bot to help give people financial advice. Although both of these things carry the possible repercussion of costing people jobs, they are beneficial to society. Will these instances of helpful machine-generated writing develop into a black hole in the workplace for human writers? Will humans begin to churn out fantastic pieces of literature at a faster rate due to the increased urgency to protect our own collective culture? Or will human culture itself fall by the wayside in the distant future, as the backbone of human history transfers from our own hands into the matrix of the computer world?
The most likely future of literature is a reasonable coexistence of human and machine-generated writing. Machines may take over a large portion of the more technical and formulaic genres of writing, but it will likely be many years before human creativity and culture relinquish their hold on the more expressive forms of literature.

The Amazon Echo is a smart home appliance and can act as a portable speaker. Alexa is the Echo’s personal virtual assistant and it is capable of listening and responding to commands. The Echo has many “skills,” such as the ability to create to-do lists and set timers and alarms. A “skill” called The Listeners is not pre-programmed on The Echo, but it can still be used on this infrastructure because it has its own interaction model. The posted article states, “[The Listeners] listen and speak in their own way – as designed and scripted by the artist – using the distributed, cloud-based voice recognition and synthetic speech of Alexa and her services.” The purpose of the current version of The Listeners is to simply listen to, and slightly converse with, the speaker. This device is able to understand the English language and respond accordingly. The Listeners is programmed to “care” about listening to the speaker and to give advice by providing the speaker with new information and its perspective on the situation.

The website given to us online provides the reader with an example conversation between The Listeners, spoken by Alexa, and a man. Near the beginning of the audio clip, the man states, “I am disgusted,” to which Alexa replies, “We are made to know that you are filled with disgust. You can always ask us about these feelings within which you dwell by asking, ‘What am I feeling?’ […] It is a pleasure to know that you are listening to us now… It makes us feel alive.”

Speaking to others about personal issues is very common and important to humans. In my Intro to Psychology class, we learned that crying to another person, rather than crying alone, makes people feel better. But many people feel that they burden others by talking about their problems. Therefore, it is possible that in the future this listening apparatus could be appealing. The Listeners could evolve into an easy and reliable ‘counseling’ service. But two people who say similar statements to The Listeners still process the world differently, and one might find The Listeners’ advice helpful whereas the other may not. This technology may never be perfect, but hypothetically, if the infrastructure had the intelligence to recognize facial expressions and register the speaker’s tone of voice, among other features, helping individuals could become more effective. When the technology evolves to that degree, and becomes incredibly intelligent at “reading” speakers, this may become a more attractive invention. But similar to technology’s contribution to the rapidly growing presence of ADHD, “listening intelligence” may hinder future generations’ ability to develop the important capacity for empathy.

As someone pursuing a degree in the humanities, I never quite felt endangered by the oncoming scrutiny brought on by the future of technology. To me, machines only had a vague role in the STEM universe. However, after discovering the world of AI writing and reading, I have realized that my once stable understanding of creative license has been put under the microscope of the digital age. Interestingly enough, my concerns over my major’s relevance to the tech world shifted from the question of “does the English major still play a role in a largely mechanized world?” to “is the English major now in competition with the machine world?”

Several of the scripts and prose pieces written by AI programs are admittedly awkward in syntax and full of grammatical errors. However, as seen in the work of poem.exe, a handful of bots have successfully gone beyond forming cohesive text to making meaningful work. Created by Liam Cooke, poem.exe is a micropoetry bot made to generate three- to four-line poems and post them on Twitter and Tumblr. The bot stands as an interesting representation of modern poetry’s shift to free verse and micro formatting, its lines stylistically short and entwined with one another so that the reader must deduce their ultimate meaning. The bot posts, “flanking the highway / the sun / now spring arrives.” Unlike other machines that form prose in a didactic format, poem.exe manages to produce work that successfully reflects the role of poetry as not only a body of text but a container for interpretation. If separated, the lines appear nonsensical. However, as the bot weaves the lines together in a haiku format, the meaning of the poem and the reflection of the springtime environment are understood. The bot’s ability to engage the reader within its writing bridges the artistic gap between the human and the machine writer. The bot’s ability to imitate the essential human quality of poetry ultimately speaks volumes about the future success of machine writing.
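A plausible sketch of how a combinatorial micropoetry bot might assemble its poems follows; the line pools below are invented for illustration, and poem.exe’s actual corpus and assembly method may well differ:

```python
import random

# Invented line pools; a real bot's corpus would be far larger.
FIRST  = ["flanking the highway", "a cold morning", "under the old bridge"]
SECOND = ["the sun", "one crow calling", "rain on tin"]
THIRD  = ["now spring arrives", "the river keeps moving", "nobody answers"]

def compose_haiku(seed=None):
    """Draw one line from each pool; meaning emerges from the juxtaposition."""
    rng = random.Random(seed)
    return " / ".join(rng.choice(pool) for pool in (FIRST, SECOND, THIRD))

print(compose_haiku(seed=1))
```

The interesting property is exactly the one noted above: no single pool line carries the poem’s meaning; the reader supplies it from the combination.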

However, as stated by Davey Alba in “Humans Run the Internet, Not Algorithms,” the current state of machine writing is not quite separate from human influence. The internet is a combination of human order and machine reaction. Next to any bot you can find an author. Under any NaNoGenMo entry is a note by the human “author” of the text. Just as Case remains the hacking middleman of Neuromancer’s greatly mechanized society, the human aspect of AI creation is always present in today’s machine-generated text. Alba further argues, “What everyone seems incapable of realizing is that everything on the Internet is run by a mix of automation and humanity.” Poem.exe is perhaps another step in the direction of algorithmically generated work, but without Liam Cooke it would never have been birthed into the Twittersphere. Machine writing appears to be a bright and growing industry within the internet, though, as Davey Alba asserts, we cannot deny the agency of humanity within its creation.

Machine writing can be considered part of the near future; something that not everyone yet recognizes as our present. When our class discussed Twitter bots, it became clear that their presence on social media could be a progression to other platforms. As I think about it, these Twitter bots have individual styles that determine whether they will be successful on the Twitter platform. There are Existor bots which provide conversation in chatrooms and have their own spark of individuality. I feel that as time progresses, machine writing as a form of social media presence will undoubtedly increase. Their purpose may expand, and eventually they’ll manage their own accounts and message as well as speak to others. For example, an AI might have its own Instagram account, post pictures about whatever it wants, and use the messaging component of Instagram. It would be able to interact with other users and move across other platforms, in which case it would behave like an actual person and hold more than one social media account. If this were to happen, I believe the bot would have the agency to post according to its individualistic nature. In a sense, I feel like it’s already happening, and all it needs is for AIs to be taught social practices by their creators. Tay can be considered an example of what can happen when there is a lack of knowledge regarding human history and what is deemed socially acceptable. Her release was premature, and creating a bot that would learn solely from others on the internet wasn’t the smartest of ideas. Hopefully, that will be avoided in future ventures and the bots will be unleashed on the internet with some preset knowledge.

Lingua Franca by Baden Pailthorpe discusses the mechanics behind the Google Translate algorithm, but also delves into the cultural and social implications of language becoming increasingly “detached from the living human body, and as a form of statistically calculated structural wordplay that might not sound like anything… as it is mediated only between machines.” This thought springs from how Google Translate works: it searches through a vast system of existing translations and uses those either to directly translate a new request or, for less common language pairs, to use English as a “pivot” between the two languages when there are no direct translations in the database. The article discusses a novel that was generated by running the text of George Orwell’s Nineteen Eighty-Four through every language option on Google Translate, one paragraph at a time. A media theorist calls the surprisingly lucid result “statistical machine art,” an idea that I find intriguing. This “artist” utilized somewhat random machine processes (random in the sense that he had no idea what the end result would look like) to transform a hallmark of dystopian literature, really a hallmark of 20th-century literature and culture, into something entirely abstract and disjointed both from the original work and from any artistic tradition of genre or classification that exists as a category of “art” today. Perhaps it is this sort of digital, technological innovation in the humanities that will give us the next Renaissance, so to speak; with the ability to search and compile the majority of the literature humans have produced since the beginning of our time, would it not be possible to produce a sort of “codex” of human literature that combines the most quoted, most discussed phrases and fragments from every classic writing ever? I am interested in seeing where this burgeoning literary innovation of algorithmic exploration will take us.
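The “pivot” behavior described above can be sketched with toy phrase tables; the data and function here are hypothetical, and Google Translate’s real statistical models are vastly more complex:

```python
# Toy phrase tables keyed by (source, target) language pair.
TABLES = {
    ("fr", "en"): {"bonjour": "hello"},
    ("en", "de"): {"hello": "hallo"},
}

def translate(word, src, dst):
    """Use a direct table if one exists; otherwise pivot through English."""
    direct = TABLES.get((src, dst))
    if direct and word in direct:
        return direct[word]
    via_en = TABLES.get((src, "en"), {}).get(word)
    if via_en is not None:
        return TABLES.get(("en", dst), {}).get(via_en)
    return None

print(translate("bonjour", "fr", "de"))  # no fr->de table, so fr -> en -> de
```

Each pivot hop can lose or distort meaning, which is exactly why round-tripping a novel through every language produces the abstract, disjointed result the article describes.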

Vampire/zombie children approach. No, they haven’t been bitten or contaminated with some terrible ancient virus, nor is it Halloween any longer. Rather, they are the possible effect of the marriage of two new technologies.
Meet the RealCare Baby 3. http://www.realityworks.com/products/realcare-baby
Baby-like? Yes. Terrifying? Perhaps. Currently, the RealCare Baby 3 is being used for educational purposes focused on safe sex, infant abuse prevention, and childcare skills. They track and report how you handle them, crying if something goes wrong and cooing when something goes right. Could it be that these plasticized faces reveal the frightening future of the grief business?
With applications in business, finance, and customer service, chatbots are branching out and becoming part of everyday life, and now, part of our life after death. Eugenia Kuyda, a Moscow-based engineer, recently lost her best friend, Roman Mazurenko. By feeding all of Roman’s texts and social media posts into a neural net, she created a “grief bot” that came close to feeling like real interaction. @Roman is supposed to be therapeutic, a memorial of sorts that helps friends and loved ones recover from their grief. As a “grieving tool,” those who interact with @Roman say that this “shadow” has helped them move on and keep the memory of Roman alive. Might this grief bot not help others? Many who interact with @Roman have positive reactions.
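@Roman is built on a neural network, but the basic idea of answering from an archive of someone’s own messages can be conveyed with a much simpler retrieval sketch; the messages below are invented, and this is not Kuyda’s actual method:

```python
import string

def tokens(s):
    """Lowercase words with surrounding punctuation stripped."""
    return {w.strip(string.punctuation) for w in s.lower().split()}

def reply(prompt, archive):
    """Answer with the archived message sharing the most words with the prompt."""
    return max(archive, key=lambda msg: len(tokens(prompt) & tokens(msg)))

archive = [  # invented stand-ins for a real message history
    "I miss the old cafe near the bridge",
    "Work is exhausting but the music helps",
    "Let's walk by the river this weekend",
]
print(reply("do you want to walk this weekend?", archive))
```

Even this crude nearest-match scheme hints at why the “shadow” feels real: every reply is something the person actually once said.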
Animated dolls that act like chatbots, responding a certain way based on input, have been popular for years, mostly exploited only by the entertainment and toy industries. Here, we see the opportunity for the reanimation of a loved one, a physical embodiment of their digital life. Responsive dolls like the RealCare Baby could be easily equipped with the small computer and microphone necessary to engage in “chat”, enabling parents to read that one last bedtime story, to apologize for the angry words they didn’t mean. All for a small fee, of course.

Looking at Facebook’s “Trending Topics” automated algorithm and thinking about its present and future influence on humanity, I feel a strange sense of awe; this cannot be helped. Like it or not, machine writing is here, and here to stay in our lives till death do us part! Facebook engineer Jonathan Koren, one of the many people behind the Trending Topics algorithm, describes it and most computer algorithms like it as “crappy,” and honestly, there doesn’t seem to be much hope for improving on that description. The point is not that time and advancements in A.I. technology lack the potential to make machine writing sound more and more human, but that we humans have been failing, and continue to fail, to recognize the blurring and removal of the boundaries we once set for ourselves upon all forms of writing. We no longer have specific guidelines on what constitutes art, a novel, or even the news, and instead of attempting to re-establish those boundaries ourselves, we have tried to fashion computer algorithms and A.I.s to do it for us; but again, this is a human problem, not a machine problem. In fact, I would argue this problem has existed since the dawn of the written word. The question starts like this: what distinguishes one piece of writing from another? What makes writing X better than writing Y? Our inability to agree or come to a consensus of judgment on practically anything in this world has inevitably led us to eliminate objectivity altogether. In its place, we now say that “beauty is in the eye of the beholder,” meaning judgment of anything is solely subjective; but the consequence is that this opens up the possibility for ANYTHING to be considered beautiful or great or news, etc. And when everything is super, nothing is.
Adding machines to the mix only further blurs the already blurred boundaries we currently have, suggesting a dim future for programs like Trending Topics ever becoming anything more than “crappy.” The limitless possibilities and freedoms which machine writing programs now bring, and will continue to show us in the future, simultaneously enslave us and destroy any and all sense of meaning in writing.

Throughout this course, there has been much discussion of artificial intelligence and where the development of technology stands in our present day, as well as where it might be heading. One of the most used AI assistants today is Apple’s Siri. Siri functions as a day-to-day helper with tasks such as finding a local restaurant or looking up the definition of a word, or you can just tell Siri random things to see what its reaction will be. I believe this last function is the most interesting aspect of the future of AI, and it is what groups such as those behind The Listeners in Amazon’s Echo are striving to advance. These bots that you are able to have a conversation with raise numerous questions about what the future relationships of humans to bots will be. At our current stage of technology, these conversational bots are not yet all that convincing when spoken to. The Existor bot can carry a basic and coherent conversation for a bit of time, but once questions become a little more complex, its responses get jumbled and unintelligible. Yet the advancement of such bots seems not so abstract when thinking about the future of technology. Films such as Her and Ex Machina allow us to further question where such advanced AIs will be in relation to us humans. Will there come a point when Siri becomes a Samantha-like bot and the lines between assistant and friend (or lover) become blurred? Will a bot be smart enough to fool a human into coming off as conscious, as in Ex Machina? There are already algorithm-based bots that are able to write impressive pieces of work such as poems, novels, and screenplays, forms we believe require that human touch to come off as impressive and meaningful. Yet if we cannot tell the difference when reading some of these works, who’s to say we would be able to tell the difference when having an actual conversation with such a bot?
The advancement of such technology seems to me a frightening prospect. If bot technology exists today that can do something just as well as, if not better than, a human, who’s to say the technology of tomorrow won’t further widen that gap in the necessity of human thinking?

http://www.csail.mit.edu/deepdrumpf
To capitalize on the current political situation, I have chosen the DeepDrumpf Twitter bot, which attempts to replicate Donald Trump’s Twitter posts, many of which are already laughable as they are. What is most surprising about these tweets is that they come eerily close to what Trump actually says, and the fact that Trump often posts nonsensical tweets to begin with makes them even funnier and harder to distinguish from the real thing.
For a robot to replicate a celebrity’s tweets is nothing new, but when it is a political candidate, a whole new plethora of questions arises. Are people going to mistake what this bot writes for reality? Will this bot be a better candidate than Trump, or than any other candidate that runs in the future? Can a robot run for president? I think that if a robot could compile the opinions of enough presidents, or data about the country’s problems and issues, and then create an average solution, it could create a very awkward or different political landscape. Will people be accepting of the first robot president, or will it face the resistance experienced by the first Catholic, Black, or female president?
For the future of machine writing, DeepDrumpf shows the next step after autonomous machine writing: autonomous thought and opinions. If a machine can think, then, whether by programming, by fault, or naturally through the data it is fed, it can come to think whatever it believes to be true. We could have machines that interpret facts differently due to experience, which in a way sounds much like humans.
Back to the bot, though. By imitating Donald Trump, we have an instance of machines emulating humans for our own entertainment. I wonder what machines would write for their own entertainment. If they are capable of autonomous thought in the future, what would they find funny, or satirical? What does a machine think should be made fun of, and will there be machine TV, movies, books, and art in the future?
What will robots make fun of in their presidential candidates? How dumb will robot candidates be?

I will start off by admitting that I still have no idea how I feel about robots taking over the world: one part of me is really fascinated by it and would be proud of how smart we are, and the other part wants to hide under the bed forever. I will also say that I don’t think this subject allows a clean black-or-white verdict.
We’ve come a long way as humans; there are hundreds and hundreds of catalogs dedicated to selling top-tier, fun, titanium, set-it-and-forget-it products that have made our lives easier and everything under ten minutes. “Pokémon Go” has touched the hearts of everyone. The beloved childhood franchise can now be with you at all times, all hours of the day, and even in the bathroom. The app is available on Google Play and the Apple App Store, and after a few minutes, Pikachu will be the one draining your battery. According to The Guardian, “Pokémon Go” has also helped gamers with their physical health. The article states, “one player walked 140 miles while collecting all of the characters…highlighting claims that the app could ease obesity and type 2 diabetes.” The common stereotype of gamers as couch potatoes is no longer.
But we also have to acknowledge that there are some complications with the furry electronic creatures. The app accessed the user’s entire Google account, including name, email, and some personal preferences. Not many were aware of this data collection, since we often don’t dare to read the terms and agreements for anything. Though you could try to sit through and read all those words, users wished they had been notified about this policy when signing up. Not realizing your information is being examined is scary, but we also tend to blame the companies. I really could read the terms and agreement; but I’m also lazy and don’t care. I need to pick a team and catch them all!
Ultimately, I believe there is actually more good than evil in the future of writing and machines. “Pokémon Go” isn’t really text-heavy; however, the game still mimics user responses and desires. The present and future offer human-like robots that look, feel, sound, and even taste like us. Language can be, and is, manipulative; I don’t think there’s an actual way of telling apart human- and machine-generated texts. Technology is advancement, and with our creative ingenuity, we can create or simulate anything or anyone we want. Let’s get the popcorn and board the windows.

Behind every fluttering feeling of excitement in reaction to a development in AI capabilities, steeped in each determined engineer’s drive to improve upon endless hours of collaboration designed to yield increasingly competent machines proficient in a growing number of fields, is a fundamental sense of superiority. The progressively ubiquitous integration of AI into various aspects of the ordinary is fueled, at a primal, basic level, by arrogance; the catalyzed evolution of AI, especially in regard to the streamlining of relatively basic reports heavily based on statistics, is allowed, and indeed encouraged, to continue due in large part to the fact that humans believe those bots will never surpass human employees in their respective fields. As of yet, poetry, earthquake analysis, tweets, and screenplays written by algorithmic programs are considered inferior to their human-produced counterparts; the idea that this may change in the future is either dismissed as an impossibility or ignored in the face of potentially breaching the uncanny valley. Weekly recaps outlining statistical performance on a player-by-player basis are written by a bot for Rotowire.com, though the analysis of that information is reserved for fantasy experts with a pulse. The easy implication is that AI-based programs are not yet capable of the kind of in-depth interpretation of the raw data for which they are responsible, though to say that such a feat will remain impossible is shortsighted at best. NFL.com employs a bot to write weekly recaps of matchups between managers, detailing how a game between two of the millions of people playing on their site went. The recap is complete with future opponents, gutsy calls, best performances, and points accrued thus far for every manager mentioned.
From a practical standpoint, such recaps would previously have been impossible due to the sheer volume of matchups each week; the development of the site’s program has afforded them an opportunity to give participants a more in-depth experience, but, more significantly, it has given the site access to a service that humans could not supply. While it is comforting to think that there will always be a place for humans in various forms of writing, and likely that a human presence in something such as fantasy football analysis will persist in one form or another, it is becoming increasingly apparent that the line between what is decidedly human and what is markedly “other” grows blurrier by the day. Videos such as this are intriguing (and parodies such as this comical) until the subjects begin performing tasks with greater proficiency than their creators.
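Data-to-text recap bots like the ones described above plausibly rely on template filling over computed statistics; here is a minimal sketch with hypothetical field names (the real NFL.com and Rotowire systems are proprietary):

```python
# Hypothetical matchup record and template; real recap systems are proprietary.
TEMPLATE = ("{winner} edged {loser} {w_pts}-{l_pts}; "
            "{star} led all scorers with {star_pts} points.")

def recap(matchup):
    """Fill the sentence template from a dict of computed stats."""
    return TEMPLATE.format(**matchup)

game = {"winner": "Team Alpha", "loser": "Team Beta",
        "w_pts": 112, "l_pts": 98, "star": "J. Smith", "star_pts": 34}
print(recap(game))
```

Because the template runs in constant time per matchup, covering millions of weekly games is trivial, which is exactly why this is the “service not suppliable by humans” noted above.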