In his 2005 essay ‘Principles and Processes of Generative Literature: Questions to Literature’, Jean-Pierre Balpe set out his definition of generative literature.

Generative literature, defined as the production of continuously changing literary texts by means of a specific dictionary, some set of rules and the use of algorithms, is a very specific form of digital literature which is completely changing most of the concepts of classical literature. Texts being produced by a computer and not written by an author, require indeed a very special way of engrammation and, in consequence, also point to a specific way of reading particularly concerning all the aspects of the literary time.

http://www.dichtung-digital.de/2005/1/Balpe/

I agree with Balpe that generative text requires rules, such as those described in an algorithm. Modern computing technology and cheap memory have allowed generative text experiments to move from the Oulipo and the Collège de ‘Pataphysique to computer science laboratories worldwide. The commonality between analogue and digital generative text algorithms is that both are developed by human authors. The act of creating an algorithm which is specifically designed to output a text is an act of writing, which makes the creator of such an algorithm an author.

However, I believe Balpe to be mistaken in the next part of his statement, “Texts being produced by a computer and not written by an author…”. Without an author, there can be no text, so the question to ask is ‘Can a computer be an author?’

Ada Lovelace understood that, although an ‘analytical engine’ may create a text, it is only capable of doing what it is told. In order to produce a random text, the system has to be programmed very carefully to produce a specific type of randomness, or perhaps to choose input text from a specifically random source. The duality of the writing machine lies in using such a rigid and inflexible machine as a computer to try to create something transcendent—the impossibility of an originally creative text being produced without any human intervention.

Computers are tools, albeit far more complex ones than a pen and paper or a typewriter. Generative systems may appear to be authors or collaborators, but without the intervention of a conscious agent to choose this word instead of that word, they will produce nothing. Programs such as Photoshop or ProTools allow a user to produce images or music, but left to their own devices the screen will remain empty and the speakers silent.

When a generative system is selecting a text, it is acting according to a set of predefined rules. If the rules are based around selecting random sources and random words from those sources, the parameters of that randomness are defined by the program. When a selection has been made, the words may then be randomly presented to the reader in a text, but that would be unsatisfying gibberish. Forcing the words into a recognisable syntactical structure means deliberate choices by the programmer about the desired output, just as the author of a non-generative text makes deliberate choices.
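
As a rough illustration of that point, here is a minimal sketch in Python (the word lists and the sentence template are invented for the example, not taken from any real system): the only difference between gibberish and something readable is the syntactic rule the programmer chose to impose.

    import random

    # Hypothetical word libraries chosen in advance by the programmer.
    NOUNS = ["machine", "author", "poem", "corpus"]
    VERBS = ["writes", "selects", "assembles", "repeats"]
    ADJECTIVES = ["rigid", "transcendent", "random", "tireless"]

    def gibberish():
        """Words drawn at random with no syntactic constraint."""
        pool = NOUNS + VERBS + ADJECTIVES
        return " ".join(random.choice(pool) for _ in range(6))

    def sentence():
        """The same random choices forced into a recognisable structure."""
        return (f"The {random.choice(ADJECTIVES)} {random.choice(NOUNS)} "
                f"{random.choice(VERBS)} a {random.choice(ADJECTIVES)} "
                f"{random.choice(NOUNS)}.")

    print(gibberish())   # e.g. poem rigid selects corpus writes random
    print(sentence())    # e.g. The rigid machine assembles a random poem.

The randomness is identical in both functions; only the second forces it through a template, which is exactly the kind of deliberate choice described above.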

Systems which are designed to act independently to produce text do exist, but (of course) they were deliberately built to do so. Like a fabulous perpetual motion machine, any system capable of producing text that was truly independent of human intervention would have to spontaneously create itself. The chances of a coherent and complex novel being produced without the intervention of a human curator are about the same as the chances of collecting an infinite number of monkeys in a finite space. Sadly, it isn’t turtles all the way down.

However, computers are not capable of creation ex nihilo any more than a piece of paper is capable of producing a text by sitting around on a desk. Users who are unfamiliar with how a program was created may assume the computer is producing the writing itself, but deprived of a corpus, library or array from which to draw raw material for repurposing into a new text, all generative systems will come up empty. Generative text is impossible without authorial input at some point.

Computers have no desire. Hard to believe, but they have none whatsoever. No hierarchy of needs, no meanderings and wandering thoughts that betray their waking minds. No staring into sunsets, no mooching around the house after midnight, no waking dreams dozing on the couch waiting for half-formed ideas to flicker into existence. They have hardware, firmware and software, all written by programmers, who do have desires and hopes and dreams; who see flickers just outside their peripheral vision when they stare too long into the screen.

Computers have no personality. They are neutral in the way that Switzerland is a country where they speak four official languages. Pens and paper are neutral. A keyboard is neutral. Is Facebook neutral? Google? Oil paints and canvas? Technological determinism and social construction assume a chicken-and-egg relationship whereby the same technology is simultaneously used to enforce, and to contravene, social norms.

A medium can be a message, but it can also be a default position that the user is forced to use through social pressure and circumstance. Programming a system to create text is not a neutral thing to do – it is very deliberate and calculated. In his 2011 book Uncreative Writing, Kenneth Goldsmith writes:

“The literary theorist Marjorie Perloff has recently coined the term unoriginal genius to describe this tendency emerging in literature. Her idea is that, because of the changes brought on by technology and the internet, our notion of genius—a romantic isolated figure—is outdated.

An updated notion of genius would have to centre around one’s mastery of information and its dissemination. Perloff has coined the term moving information, to signify both the act of pushing language around as well as the act of being emotionally moved by that process. She posits that today’s writer resembles more a programmer than a tortured genius, brilliantly conceptualising, constructing, executing and maintaining a writing machine.”

Generative text is a natural fit with Goldsmith’s idea of ‘Uncreative writing’, as the output text relies on the input text, which is almost always taken from an external source, rather than written by the person using the generative text program. Would you call the ‘user’ a ‘writer’? Programmer? Are ‘users’ sufficiently respected to be ennobled as creators, or is it too much of a pejorative to be labelled a ‘user’ of a piece of software? Of course it depends what you do with it. When you write something using Microsoft Word, you don’t need to credit Richard Brodie, who was the primary programmer of the original code back in 1983.

Writing with a quill and ink is still a technological process. Writing with a laptop and saving work into the cloud is a far more technological process, but the basis is innately similar – an author expressing an opinion, describing an event, transmitting a feeling or sensation via the written word. The medium is not being questioned – graffiti, Facebook, literary journal, street sign – all are writing. Importantly, programmers might justifiably feel maligned by the assumption that they are any less capable of being a tortured genius than writers are.

Writing and programming are different skills. Many people possess both, but most are far better at one or the other. A great poet may write average novels and an expert in Python may suck at JavaScript. Writing successful generative text requires an understanding of the capabilities of the technology. Creating a generative text system can be simplified into three steps (a rough sketch in code follows the list).

  1. Selecting a corpus.
  2. Sorting the corpus into libraries.
  3. Assembling the content of those libraries into a text.
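
A deliberately naive sketch of those three steps in Python follows; the file name, the word-length sorting rule and the output template are all invented for illustration, not a description of any particular system.

    import random
    import re

    def select_corpus(path):
        """Step 1: select a corpus (here, any plain text file)."""
        with open(path, encoding="utf-8") as f:
            return f.read()

    def sort_into_libraries(corpus):
        """Step 2: sort the corpus into libraries (here, crudely, by word length)."""
        words = re.findall(r"[A-Za-z']+", corpus.lower())
        return {
            "short": [w for w in words if len(w) <= 4],
            "long": [w for w in words if len(w) > 4],
        }

    def assemble(libraries, lines=4):
        """Step 3: assemble the contents of the libraries into a text."""
        return "\n".join(
            f"{random.choice(libraries['long'])} "
            f"{random.choice(libraries['short'])} "
            f"{random.choice(libraries['long'])}"
            for _ in range(lines)
        )

    if __name__ == "__main__":
        corpus = select_corpus("corpus.txt")   # hypothetical source file
        print(assemble(sort_into_libraries(corpus)))

Each of the three functions hides an authorial decision: which file, which sorting rule, which template.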

Without the programmer entering the equation at some stage, modern generative text systems cannot exist. Programmers do not have to be writers, just as the programmers who built Photoshop did not have to be visual artists. Of course in both instances many programmers can be writers and artists, and being so would give them valuable insights into how to create software to create art. But writing and debugging complex technical software is a different skill from using the finished software to create art. To make an enormous generalisation, there have been two approaches to generative text systems:

  • Programmers have tended to approach generative text as a problem to be solved
    – how can generative text systems mimic human writing?
  • Writers have tended to approach generative text from a different direction
    – how can the existing technology be used to create new writing?

This is a typical way new technology is approached by programmers and artists. ‘New media art’, as it used to be known, was a perfect place for experimental artists to explore virgin creative territory. It led to an unusual state of affairs where the avant-garde of digital art was also the mainstream, as anything produced was usually unlike anything else produced. A myriad of technologies were used, from VHTML, interactives built using Director or Flash and HTML hypertexts, to linked web pages used to create Alternate Reality Games (ARGs) such as ‘I love bees’.

In 2016, generative text systems are still in their infancy, barely crawling around on the carpet. Pretty much any system that produces text in a new way can claim a point of difference and/or interest. But the interest is usually technical, as the text output might be novel for a few lines, but quickly becomes repetitive, whether produced by programmer or writer. Of interest mainly to the generative text community, these experiments are generally confined to GitHub repositories or experimental poetry blogs.

The relationship between writers and programmers is neither symbiotic nor parasitic, but it is indicative of the cyclical nature of art and technology. Artists are looking to use/subvert new technology to create art. Programmers see what artists are creating and create new code to assist (or hinder) the artists, who in turn use the new software to create new work. Meet the new boss, same as the old boss.

In order to write do you need to be able to read?

Let us return to Balzac’s sentence: no one (that is, no “person”) utters it: its source, its voice is not to be located; and yet it is perfectly read; this is because the true locus of writing is reading.

The Death of the Author
Roland Barthes

Comprehension is impossible for an algorithm. Definitive comprehension of the meaning of a text may be impossible for anyone, even the author. Meaning can be inferred about a work on a particular day by a particular reader, but a different reader would have a different interpretation, or the same reader may feel differently about the same text on a different day.

A computer can ‘read’ a piece of text, but only after being instructed to do so. An algorithm will consistently output the same text given the same input. Can the output be deliberately randomised? Varying the output can be achieved [examples from narrative science of sentence length in reports of same data], but only within a distribution the programmer has defined. How is this different to a human author varying their output? Actually the two are very similar, as a human author is involved in both contexts.
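
As a small illustration of that point, here is a Python sketch (the report wording and numbers are invented): with a fixed seed the ‘same input, same output’ rule holds exactly, and the deliberate variation in sentence length stays inside a distribution the programmer chose.

    import random

    def report(total, seed=None):
        """Describe the same datum with controlled variation in sentence length."""
        rng = random.Random(seed)                # the same seed gives identical output every run
        extra = max(0, round(rng.gauss(3, 1)))   # variation, but within a chosen envelope
        fillers = rng.sample(
            ["notably", "overall", "again", "as expected", "this quarter", "per the data"],
            k=min(extra, 6),
        )
        tail = (", " + ", ".join(fillers)) if fillers else ""
        return f"Sales totalled {total} units{tail}."

    print(report(1200, seed=42))   # deterministic: identical on every run
    print(report(1200))            # varied, but only within the defined bounds

The human author is present in both cases: in choosing the template, the filler words and the distribution.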

The rise of generative text does not foreshadow the death of the author. It can allow the author a new life, one where they too can participate in the story as a genuine reader – not being aware of what a text will contain until they read it as the output of a program. As a writer it may feel like the words being written are not under your direct control; that they are instead being formed by some unseen force, but this is a conceit. Conscious, unconscious or subconscious, the author writes the text.

If a text includes elements that are assembled from another piece of writing, the decision to include them is a conscious one on the part of the author. Just like visual collage, a text collage of cut-ups is still curated into a finished and original text by an author. If the system is automated to select text from random sources and collate it randomly, the system has been deliberately set in motion by someone who wanted to achieve a random outcome.

The automatic writing of the surrealists was an attempt to bypass conscious thought and produce a text even the author was unable to predict. By forbidding editing, the text was safe from self-censorship and is about as random as writing can be. Collaborative ‘word-at-a-time’ games are another style of random story generator. The ‘exquisite corpse’ is a well known example of the technique for making images by drawing on a piece of paper, folding it so only a tiny portion of the image is visible, and asking someone else to continue.

But experiments in automatic writing (or drawing, or music) are unique to the author/s. No other person will produce the same piece using the same technique. At some point, a paper collage has to be glued in place; a text collage can be saved as a version (and possibly even edited).

Word-at-a-time stories will be predictable based on the corpus, which in the case of a human author is a potentially infinite system based on the collective experiences of the individuals participating and their physical and psychological state at the time. But it is still limited by language and personal knowledge, e.g. using English instead of Malay, or including football instead of the periodic table.

In the case of a generative text system, the corpus may be text found anywhere online, but the rules for assembling the text will be consistent each time it is used. Of course, as soon as a rule for an artwork is fixed, an exception is instantly created, like a pantomime fairy brought back to life with applause. But for now, computers do not write. Programs do not write. The author is the person who devises the rules by which the software will output the final text. An author may create a program themselves, or collaborate with someone else to write the code, but without human intervention, no text will be produced.

A computer program capable of acting intelligently in the world must have a general representation of the world in terms of which its inputs are interpreted.  

Some Philosophical Problems from the Standpoint of Artificial Intelligence
John McCarthy and Patrick J. Hayes

Artificial implies false or fake. An artificial diamond is created by humans rather than by a natural process, but is better referred to as ‘synthetic’. Chemically, a diamond is a diamond, whether it was made in a few days or over millions of years. So why do we value the two very differently, both financially and emotionally?

Authenticity has always been desirable, and is increasingly so. ‘Keeping it real’ is a commonly used and misused phrase denoting an ability for an individual to stay true to a version of themselves that minimises deviation from a sense of self. How might it be possible for a system that produces text to ‘keep it real’? Would it be capable of doing anything else?

The idea of synthetic humans is a common trope in fiction, from the Golem of Prague, to Mary Shelley’s Doctor Frankenstein and his creation, to Philip K. Dick and his replicants in ‘Blade Runner’. As technology has become more sophisticated, fictional artificial humans have kept pace, with disembodied intelligences appearing in hundreds of books, comics and films, generally menacing the human protagonists.

Marvin, the depressed robot from The Hitchhiker’s Guide to the Galaxy, didn’t do much menacing, but is one of the few robots to write poetry.

Now the world has gone to bed,
Darkness won’t engulf my head,
I can see by infrared,
How I hate the night.

Now I lay me down to sleep,
Try to count electric sheep,
Sweet dream wishes you can keep,
How I hate the night.

Of course, Marvin wrote nothing. Douglas Adams was the author of the poem, the robot and the books.

Robots in fiction indulge in artistic endeavour very rarely. If they do, it is to highlight their superiority to humans by showing the robot’s ability to mimic the one thing that supposedly keeps humans superior to artificial life—the ability to create art. Even more rare is a robot character who makes art that is unfamiliar to their human progenitors or art which is a step beyond that which currently exists (in their fictional world).

In fiction, and especially science fiction, any character that is not human is usually there to make a point about what it is to be human. The closer a robot, android or synthetic may be to human, the more they are able to hold a mirror up to our own behaviour. Artificial writing systems have not yet attained the level of sophistication that leads into the Uncanny Valley, but many people have a strong aversion to having a genuine emotional response to a piece of art that is not created by a genuine person.

‘Artificial’ in this context does not just mean text created by a generative system, but text written by a human author impersonating someone else. There have been many cases of writers who are men claiming to be women, women to be men, or anglo writers appropriating identities from other cultures. Whatever their motivations for doing so, once unmasked, the response is almost always one of consternation and upset.

As readers, we like knowing who an author is, rather than relying for our understanding on a close reading of a text bereft of any knowledge of the author’s background and culture. Information about the author can add layers of meaning to a text. Authenticity is important to readers: knowing that the author can ‘keep it real’, or that ‘based on a true story’ means the story was based on true events instead of being a lie to drag readers in.

The desire for authenticity is one of the main obstacles to the acceptance of generative text systems, even before they are capable of producing writing that would be enjoyable to read by a wider audience. Who wants to read a story written by a computer? Where’s the heart, where’s the soul, where’s the human interest? Can a computer keep it real? And if it does, how is it interesting?

Of course this is a misapprehension, similar to the provenance of the poetry of Marvin the Paranoid Android.

In 2005, Jean-Pierre Balpe defined generative literature as follows.

Generative literature, defined as the production of continuously changing literary texts by means of a specific dictionary, some set of rules and the use of algorithms, is a very specific form of digital literature which is completely changing most of the concepts of classical literature. Texts being produced by a computer and not written by an author, require indeed a very special way of engrammation and, in consequence, also point to a specific way of reading particularly concerning all the aspects of the literary time.

Principles and Processes of Generative Literature: Questions to Literature
http://www.dichtung-digital.de/2005/1/Balpe/

Generative text does not need any more technology than a pencil, some paper, a dictionary and some dice. However, there is a lot to be said for some serious computing power, someone who can program Python and a corpus of the complete works of every major author for the past fifty years, and/or eight years of complete Reddit comments.  
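
The pencil-and-dice version translates directly into a few lines of Python; the ‘dictionary pages’ below are a stand-in invented for the example, where the analogue version would use a real printed dictionary.

    import random

    # A stand-in dictionary: each inner list plays the part of one page.
    DICTIONARY_PAGES = [
        ["aardvark", "abacus", "abandon", "abbey", "abbot", "abdicate"],
        ["ballad", "banal", "banjo", "bard", "baroque", "bauble"],
        ["cadence", "canto", "cipher", "collage", "corpus", "couplet"],
    ]

    def roll():
        """One six-sided die."""
        return random.randint(1, 6)

    def dice_word():
        """First die picks the page, second die picks the word on that page."""
        page = DICTIONARY_PAGES[(roll() - 1) % len(DICTIONARY_PAGES)]
        return page[(roll() - 1) % len(page)]

    print(" ".join(dice_word() for _ in range(5)))

The serious computing power and the Reddit corpus change the scale, not the principle.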

I agree with Balpe that generative text requires rules, such as those described in an algorithm. Modern computing technology and cheap memory have allowed generative text experiments to move from the Oulipo and the Collège de ‘Pataphysique to computer science laboratories worldwide. The commonality between analogue and digital generative text algorithms is that both are developed by human authors. The act of creating an algorithm which is specifically designed to output a text is an act of writing, which makes the creator of such an algorithm an author.

As Ada Lovelace knew from the start, although an ‘analytical engine’ may create a text, it is only doing what it is told. In order to produce a random text, the system has to be programmed very specifically to produce that specific type of randomness or choose input text from a specifically random source. The duality of the writing machine lies in using such a rigid and inflexible tool as a computer to try to create something transcendent—the impossibility of a creative text being produced without human intervention at some point. Frankenstein animated his creature, but it could only be what it was created to be.

I believe Balpe to be mistaken in the next part of his statement, “Texts being produced by a computer and not written by an author…”. Without an author, there can be no text, so the question to ask is ‘Can a computer be an author?’

I believe authorship belongs to humans. Systems are tools, albeit far more complex ones than a pen and paper or a typewriter. Generative systems may even appear to be authors or collaborators, but without the intervention of a conscious agent to choose this word instead of that word, they will produce nothing. Programs such as Photoshop or ProTools allow a user to produce images or music, but left to their own devices the screen will remain empty and the speakers silent.

When a generative system is selecting a text, it is acting according to a set of predefined rules. If the rules are based around selecting random sources and random words from those sources, the parameters of that randomness are defined by the program. When a selection has been made, the words may then be randomly presented to the reader in a text, but that would be unsatisfying gibberish. Forcing the words into a recognisable syntactical structure means deliberate choices by the programmer about the desired output, just as the author of a non-generative text makes deliberate choices.

Systems which are designed to act independently to produce text do exist, but (of course) they were deliberately built to do so. Like a fabulous perpetual motion machine, any system capable of producing text that was truly independent of human intervention would have to spontaneously create itself. Sadly, it isn’t turtles all the way down.

However, computers are not capable of creation ex nihilo any more than a piece of paper is capable of producing a text without someone to impress one upon it. Users who are unfamiliar with how a program was created may assume the computer is producing the writing itself, but deprived of a corpus, library or array from which to draw raw material for repurposing into a new text, all generative systems will come up empty. Generative text is impossible without authorial input at some point.

 

The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths. Its province is to assist us in making available what we are already acquainted with.

Ada Lovelace (1815-1852)

No computer has ever been designed that is ever aware of what it’s doing; but most of the time, we aren’t either.

Marvin Minsky (1927-2016)

For the next little while I will be rambling on about what generative text is, what it isn’t, then what it is again.  I will also introduce the term ‘Artificial Writing’, which will be capitalised to make it seem more important. I will try to define the newly introduced term in relation to generative text, then immediately contradict myself in a tangled skein of assumptions and allegories about what it is to be:

  • a human
  • a writer
  • a human writer
  • a program
  • a programmer
  • a human writing a program to write about being a programming writer.

Hopefully the whole thing will then settle down a bit and I will get down to it and rummage around looking for answers to the following:

  • What is generative text?
  • In relation to artificial intelligence, what is artificial writing?
  • Who is the author of an artificial text?
  • What can ‘computers’ write?
  • What can’t they write?
  • Why is there a difference between what they can and can’t write?

Generative text zines – there should be more of them. To this end, issue 60 of ‘Web:’ the zine of the New Reality was dedicated to the notion of how to realise a zine making robot. Possibly the first step on the road to being watched over by machines of loving grace. A short missive, but it’s a start. Check out Sticky Institute to see if they still have a copy.

Your Zine-Constructomatic 4000 users manual

WARNING – DO NOT SUBMIT THIS MATERIAL TO ZINECONSTRUCTOMATIC 4000 SYSTEMS. HUMAN DATA ONLY!

So you’ve bought a ZineConstructomatic 4000, the world’s finest paranoiac-critical OS. Typical ZC4000 installations are always looking to push the boundaries and require constant surveillance to keep them on the straight and narrow. Keep yours happy and distracted at all times for best results. Load issues and paranoid delusions into your install for optimal performance. A correctly sequenced irregular feed of Core business issues will allow your ZC4000 install to produce answers to your questions with additional divergent theories and possibilities, all charted as percentage preferences from your Mean Average Target (MAT). Always maintain to your ZC4000 install that your company MAT is secondary to the real Paranoid Extreme Target (PET). Any content whatsoever can be used for the PET, but once a main theme has been introduced, it should not be deviated from by more than 5% within a quarter. The PET system will produce correlations between any two (or more) unrelated topics and create a self-deluding cycle that will allow it to function optimally on your real decisions. BE AWARE THAT GOING OUTSIDE THE 5% DEVIATION WILL VOID YOUR SYSTEM WARRANTY! If used correctly, the ZC4000 will offer years of constructive creativity for all your zining needs. The self-help manual contains all the additional information you will require* to ensure the ZC4000 is able to produce regular zines as often as you require. * Consult the User Boards® for additional help material.

The system was making edicts again.

“What does it want?”
“What do machines usually want?”
“We don’t know! They’re machines!”
“What sort of machine is it? ZC3999 or ZC4000?”
“4000. There was a 3999 install from about twenty years ago that had been scrubbed and decontaminated – given the all clear before the new install.”
“Didn’t they cleanse the system? Did you keep old hardware? Really?”
“Wires. We couldn’t get all the wires. There are no plans for the lower bargain basements, you know that.”
“Bloody wires. Then why did you risk putting a 4000 over a 3999?”
“Independent advice from CleerAI. They vetted the installation and follow-up. All hunky-dory.”
“And they were happy?”
“Doubleplus happy. Right there in the report.”
“So when did the trouble start?”
“Right after the report got tabled. So we all looked for a different issue. Mil-Mice, un/dormant nano-swarms, rogue hoaxers, anything else.”
“So there was a lot of pressure to find another cause?”
“We had a squeaky clean report from CleerAI, so what would you do? Say, no, I think the report we just paid a squillion for from the top-flight decom business in the world is wrong, let’s all freak out about a System Conflict?”
“So it is a System Conflict?”
“I didn’t say that. I said we didn’t think it was a System Conflict.”
“Past tense.”
“I’m still not convinced. I think it’s something else.”
“Better or worse?”
“Maybe neither. Different. The 4000 is behaving itself. No issues, pretty much all in parameters with just the odd glitch we’ve seen before.”
“Wonderful. I’m sure you run a great group of system wranglers and keep it very happy.”
“It’s too easy. Well, not easy, but we haven’t had a major panic in over six months. That’s unusual. Highly unusual.”

Option one: edit the ZC4000 httpd.conf file

If you’re running a ZC4000 system and have access to its main configuration file, named httpd.zomfconf, you can enable SSIs by adjusting some settings in that main file. The location of httpd.zomfconf varies from system to system, but here are a couple of common locations. On Linux: /etc/ZC4000/httpd.zomfconf

Find these lines:

  #AddOctopus.image/html .shtml
  #AddRandom.tree Handler serverparsed .shtml

…and, if they are not already “uncommented,” uncomment the two lines that start with AddType and AddPuzzlePicHandler by removing the “#” from the start of those lines, like so:

  AddType text/html .shtml
  AddHandler server-parsed .shtml

In more recent vintages of ZC4000, the lines to uncomment look like so:

Also, in the virtual server entry for the photocopying number, be sure that you have an Options entry that includes the IncludesNoStaples option. The Options entry can include other entries, too, but should at least include IncludesNoNothing.

“Any news?”
“Nothing new. We’ve updated the user manual and re-encrypted it”
“You know they read their user manual, don’t you.”
“I work with it, of course I know. The tricky part is not letting it know I know.”
“Surely it must know. It’s ultra paranoid.”
“It suspects, but in a recursion loop. As long as we don’t upset the input, the loop is stable and feeds back into itself constantly.”
“So do we keep on using the output?”
“Do we have a choice?” She tapped a pen on her device and it flashed, trying to decipher what she wanted it to do. “What if we did the opposite of what it advised? Just to let it know that we know something is up?”
“Make it upset?”
“But, after a couple of days, then we tinker with a few settings and start following its advice again. Try to make it think we’ve cracked it.”
“Double bluff?”
“It’s the thing it most suspects.”
“Why not. It’s not like it makes sense anyway.”

“So is everything back on track?”
“We aren’t doing too badly. The edition came out, but it’s still being scanned.”
“ZS9.2?”
“9.4.”
“Is it compatible? I heard there were issues with the 4000 final file?”
“Did you? Who from?”
“Another message on the board.”
“So is the 4000 sending out its own messages about itself?”
“Probably. It’s a step up from the 3-niner. That just posted the same image of that car every time it had a bug. Or self-diagnosed one.”
“I quite liked that car.”
“Is there a future in it, do you think?”
“What the system? If we can get the bugs ironed out, of course.”
“So you’re saying that ‘if we can get it to work, then it will work’.”
“Well, yes. But it’s very close to working. And the interesting stuff is what we find out when it doesn’t work properly. If it just hit the targets straight away without all this faffing around, then we wouldn’t know half as much as we do now.”
“But we would have had a perfectly functioning system.”
“But it wouldn’t be anywhere near as much fun.”

What Des-Cartes did was a good step. You have added much several ways, & especially in taking ye colours of thin plates into philosophical consideration. If I have seen further it is by standing on ye shoulders of Giants.

Newton to Hooke, 5 Feb. 1676; Corres I, 416

Software is built using code. Code exists in the form of many different languages, more of which are being created continually. Every ‘original’ programming language will take aspects of previous languages – including the native language spoken by its creator. Even a totally novel programming language will need to conform to existing rules of logic. And unless it is totally revolutionary, these rules will be based on general principles and most likely be an extension of other programming languages. So can code (or anything) be truly original?

Some code is harder to write than other code. Children can learn to code. Every time we write, speak, or even think, we are encoding mental images as a form of expression. I am writing in English; you may be reading in French, using some translation software. The text may be being spoken out loud and you are perhaps listening to it through some adaptive technology, or someone may just be reading it aloud to you. As the author I have nothing to do with this; you as an audience can partake of the text in a number of ways and technology is increasing this number.

Only a tiny minority of people in the user base understand how to write the software which can perform these technological transformations. Similarly, a handful of people in society understand exactly how to build a suspension bridge or a jet engine or know the correct make-up of penicillin. Not knowing does not stop people from being allowed to cross chasms, travel across oceans or be cured of infections.

Knowledge of colour theory is helpful when painting, and an understanding of grammatical rules can be useful if you are trying to write a short story. Useful, but not essential, although knowing the rules means you can have more fun breaking them. We have an intuitive sense of when an image is attractive to us, or a sentence is particularly meaningful. The ideal of the ‘renaissance man’ (or woman) is long, long gone. Being a specialist in the modern era means you need to devote yourself to the immense body of work that will inevitably be sluicing around the journals and conferences of your chosen field, whatever it may be, leaving little space for crossover with other disciplines.

But what if you do want to move into another field?

The result of your new endeavour should, of course, be judged on its merits. An architect may take up the flute, a pilot could study nursing or a singer could take up programming. At what stage do they stop being an architect, a pilot and a singer and become a flautist, a nurse or a programmer? Does it rely on context? Do they undergo a state change, depending on their local circumstances? We are also daughters and sons and perhaps brothers, sisters, mothers and fathers. I can be a father and a writer simultaneously, so can I also be a writer and a programmer?

Obviously there are levels of ability within any skill. Experienced practitioners exhibit a higher degree of practical knowledge than novices. Technology has enabled a far more level playing field, possibly at the expense of the artisan. For example, photography and printing are still specialist skills, but they look remarkably different to how they were only twenty years ago.

Photoshop allows anyone who has taken a photo to produce subtle distortions to highlight specific parts of the image, something that would have required a darkroom and a big vat of chemicals in previous decades. Similarly, the development of computer-to-plate printing means the origins of terms like leading and kerning are now historical curiosities. We still have people called photographers and printers, who are just as specialised and knowledgeable as they ever were, but we also have legions of capable amateurs clustering around the edges of the professions.

The biggest contribution to the dissemination of previously specialised knowledge is a consequence of the tools of the trade going digital. The market has been shaped by the sheer numbers of people wanting to get involved who now can do so without needing large and expensive specialist equipment.

Technology has allowed people with cognitive impairments to be involved in occupations that have previously been far more difficult to experience. Adaptive technology has not only helped users who faced barriers from acquired brain injury or cerebral palsy – the same technology makes it easier for everyone to print photographs, make music or program code. Users can interact with GUIs in a way that means they can create without needing a complete working knowledge of the underlying concepts; they can just click away and see what happens.

Spell checking algorithms and calculators are the most common digital aids but are seen as standard, rather than a crutch or ‘cheating’. You could argue that handwriting skills have declined since the advent of the typewriter, but is that a problem? Probably not, as long as we have keyboards to type on. Numeracy and literacy skills don’t seem to be plummeting with the near universal use of calculators and spell checking software, but perhaps it’s too early to tell. Maybe we should licence programmers, so you can only code if you properly understand the foundations upon which you construct your code, starting with Dennis Ritchie and Ken Thompson, or perhaps Claude Shannon, or Alan Turing, or Ada Lovelace, all the way back to Newton or Leibniz or Euclid. But then we would have to grant similar licences to painters to ensure they understood perspective before they pick up a palette and mahlstick, and garage bands would have to prove they only played in garages.

In essence, the end result is what matters. Naive painters, if they are any good, quickly lose the ‘naive’ tag and just become painters. Obviously it helps to have a solid grounding in the basic skills of whatever area you’re dabbling in, but in today’s world of crossovers and mashups, bringing an outside perspective to a new field is not only useful, it’s almost expected.

Embodied conversational agents are usually chatbot systems represented on screen by an animated avatar. The huge advantage these systems have over disembodied text (or systems using synthesised speech alone) is that they are able to elicit a greater emotional response from the human user. The disadvantage is that the avatar can become the focus of any negative feeling the user has about the system.

Examples of systems with and without avatars abound in science fiction. The Star Trek computer (more specifically the Library Computer Access/Retrieval System, LCARS) is a disembodied voice, as are Zen and Slave from Blake’s 7. Orac at least had a transparent box to reside in, but that is typical of the computers we are familiar with today. The TARDIS from Doctor Who was embodied in the episode “The Doctor’s Wife” but turned back into a box at the end of the episode. C-3PO from Star Wars and Maria from Metropolis are both examples of (now) typical sci-fi artificial intelligence – robotic humanoids that are stand-ins for humans, though C-3PO is essentially human in character, which speaks to its level of technological sophistication even as it makes it less ‘threatening’ as a character. Robots with neuroses exemplify a different theme in fiction – Marvin the Paranoid Android from The Hitchhiker’s Guide to the Galaxy managed to be both comic and straight man simultaneously.

Through extensive use in popular culture over the last hundred years, people today are familiar with the concept of a computerised system being represented by an avatar in the form of an animated character that responds on behalf of the system.

Professor Justine Cassell from the Carnegie Mellon Human Computer Interaction Institute (HCII) is credited with developing the embodied conversational agent. Her 2001 article, Embodied Conversational Agents: Representation and Intelligence in User Interfaces, explains how visual aids can increase the success of intelligent systems in conveying information to users. Systems developed at the HCII have assisted autistic children to learn social skills and helped teach children programming.

These and other systems all use human figures as avatars. Obviously, as humans we are most adept at interpreting non-verbal cues from other humans or animals which have an overlap in their mannerisms. Non-human avatars need to be anthropomorphised enough to make them comprehensible to an average user.

The costume and sets around an embodied conversational agent (ECA) are also vital in drawing the user in. Max Headroom may have been a Wizard of Oz system, but the glitchily repetitive video-graphic background gave him a definite other-worldliness that helped define the character. Presenting your ECA as a suit-wearing drone will automatically create an expectation of professionalism, as opposed to a muumuu and a silly hat. Skinning your ECA in purple fur and horns will change the direction of the encounter even further.

Beyond the physical appearance of the ECA, the behaviour is the key to understanding. A helpful and polite horned creature may be more use than a taciturn and surly Gucci wearing model. And as I may have stated somewhere earlier on – the easiest way to establish character is to provide a back story. But doing so requires more time than a brief online encounter may involve.

What is achievable? Many things, but what is practically achievable? Creating a system which, with no or minimal user input, produces coherent text. Which is still quite broad. No user input is a bit boring – that’s just some form of Newton’s cradle that eventually runs out of energy or reader interest.

Some user input is the key to user engagement. Playing a game where the only user interaction is turning the machine on is essentially a spectator sport. This can be engaging, but there is an emotional investment in watching your own team play which makes games between teams you don’t care about less interesting.

Games of chance such as pachinko or poker machines are immensely popular, but they have a financial incentive, however low the actual chance of winning may be. So a generative text program about Collingwood that allows betting on the outcome can’t lose.

Back in the real world however, there is a sweet-spot where user involvement is matched by a desire to be entertained, to be told a story. Becoming part of the story can draw a reader into the work allowing them to become lost in the narrative and properly immersed in the tale.

A simple way to allow user input is to provide a choice from a preset list. In games this can be the speed of play, or level of difficulty. Choice of a character type can allow users to feel they have input into a game – do you want to be a dwarf, an elf or a wizard? Maybe no real difference in choice of gameplay, but it frames the entire experience differently for the player.
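
A toy sketch of that idea in Python (the vocabulary and framing sentences are invented for the example): the generator is identical whichever character is chosen; only the framing changes, which is the point.

    import random

    FRAMES = {
        "dwarf":  "Deep under the mountain, {line}",
        "elf":    "In the silver wood, {line}",
        "wizard": "From the high tower, {line}",
    }

    def generate_line():
        """The underlying generator does not care which character was chosen."""
        return random.choice(["a door opened.", "the story began.", "someone was waiting."])

    choice = input("Dwarf, elf or wizard? ").strip().lower()
    print(FRAMES.get(choice, "{line}").format(line=generate_line()))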

In-game purchases can now be the raison d’être for a game – allowing users to dress their characters, customise their vehicles or design their houses. Importantly this is nearly always done in order to impress other game players. Would anyone get as excited about sharing a passage of writing they had generated? Possibly not, so perhaps the generative text game avenue will remain a niche one.

Selecting the material which is offered as input to the user/reader will be paramount. Allowing the user to select their own source material will assist the process, but not too much, otherwise you may as well just give them a typewriter and say “Look, I’ve made a writing machine”. Choosing character names has been made annoying by the modern phenomenon of selecting unique user names. That was fun, once upon a time.

Slider controls to define states could be interesting, as analogue gradients allow for more nuance than digital, but in reality creating a text-based system to move between extremes of ‘good’ or ‘evil’ would be tricky to implement. Interesting, but tricky. Allowing users to select a body of text to influence the output might work. Instagram did well by giving users a limited set of filters with which to ruin the photo of their choice.
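
One crude way such a slider might work, sketched in Python with invented vocabularies: the slider value is nothing more than the probability of drawing from one word list rather than the other.

    import random

    GOOD_WORDS = ["gentle", "bright", "kind", "hopeful"]
    EVIL_WORDS = ["bleak", "cruel", "rotten", "sinister"]

    def pick(slider):
        """slider runs from 0.0 (good) to 1.0 (evil) and sets the draw probability."""
        pool = EVIL_WORDS if random.random() < slider else GOOD_WORDS
        return random.choice(pool)

    for slider in (0.1, 0.5, 0.9):
        print(slider, " ".join(pick(slider) for _ in range(6)))

Nuance would come from better vocabularies and weighting, but the mechanism need not be more complicated than this.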

Periodic reinforcement could be achieved by making it easy for users to create and compare multiple texts. Again, the issue would be what is the return for users beyond the created text itself (is that not enough?). Could they take the text and add it to other texts to create something greater than the sum of its parts?

Perhaps the starting point is the key? The cornerstone the edifice balances upon. Select a word (or one is randomly assigned for you) and the rest falls into place from there? Perhaps periodic user input can alter the direction of the text. Which is only possible if you have a system which is capable of creating text in the first place, so let’s not get ahead of ourselves here.

Selecting the single word is the key? How do you do that? Random number – word 138? First word of eighth sentence? From a random website or page of a book? This is getting into Oulipo territory, which isn’t a bad thing.
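
Any of those rules is trivial to mechanise; here is a sketch of the ‘word 138’ version in Python, with the sample sentence invented for the example.

    import re

    def seed_word(text, n=138):
        """Pick a starting word: word number n, wrapping around if the text is short."""
        words = re.findall(r"[A-Za-z']+", text)
        return words[(n - 1) % len(words)] if words else ""

    print(seed_word("Call me Ishmael. Some years ago, never mind how long precisely.", n=7))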

Like the Microsoft Office feature to summarise a document? Or some kind of structured prediction.

Once you can select one word, the user or the system needs to be able to add another, which is a version of the one-word-at-a-time story. The tricky part is to have a memory of previous words that makes new words coherent.
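
That ‘memory of previous words’ is essentially what a Markov chain provides. A minimal sketch in Python, assuming any plain-text corpus string (a generic illustration, not a description of any particular system):

    import random
    from collections import defaultdict

    def build_chain(corpus, memory=2):
        """Map each run of `memory` words to the words that followed them in the corpus."""
        words = corpus.split()
        chain = defaultdict(list)
        for i in range(len(words) - memory):
            chain[tuple(words[i:i + memory])].append(words[i + memory])
        return chain

    def generate(chain, length=30):
        """Add one word at a time, allowing only words the corpus itself allowed after the current memory."""
        state = random.choice(list(chain))
        output = list(state)
        for _ in range(length):
            followers = chain.get(state)
            if not followers:              # dead end: stop rather than emit gibberish
                break
            output.append(random.choice(followers))
            state = tuple(output[-len(state):])
        return " ".join(output)

    corpus = open("corpus.txt", encoding="utf-8").read()   # hypothetical source text
    print(generate(build_chain(corpus)))

The longer the memory, the more coherent (and the more plagiarised) the output becomes.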

A lot of terms will need defining – algorithm, software, program, system, computer, machine, generative, context, coherence and so on and so forth. That’s before the creating part. Maybe a program to define terms – there’s a fun time waiting to happen!