I tried Tumblr, I tried Medium, and they’re both well and good – but it’s high time I built my own space. I’m slowly re-posting my Medium articles here, because despite the nice UI and analytics, I don’t want to be beholden to them, or to feed into the general Silicon Valley content churn.
I also feel more comfortable writing less ‘formal’ pieces on my own blog, for some reason. As part of my general move away from social media and the hyper-news-cycle, this move will help me focus my efforts away from Twitter, too.
Cyberpunk doesn’t describe the future anymore — It describes today.
We’re not in 1980 anymore. We need to move on. Thirty-five years later, even if you don’t realize it, the cyberpunk vision established by Blade Runner & William Gibson is just too normal.
Shadowy exchanges of power and billions hidden offshore using complex shell companies orchestrated by lawyers & accountants? Bionic limbs and implants? Anonymous hackers & famous whistleblowers treated like traitors by their own countries? Humanity’s irreversible, inevitably cataclysmic impact on the biosphere and climate? Robots, AI, a truly global economy? It’s all real in the here and now.
The most cyberpunk aspect of it all is that people stay quiet. Modern ‘democracies’ are fallacies, but we’re depoliticised and sensationalised at the same time. Where are our priorities? We live in a group-think, outrage culture. People like Trump — ripped straight from the pages of a classic cyberpunk novel — play that culture like a fiddle. Well, as the madman himself would say, “let’s make Cyberpunk great again!”
The tl;dr is essentially that Post-Cyberpunk protagonists are “anchored in their society rather than adrift in it. They have careers, friends, obligations, responsibilities, and all the trappings of an ‘ordinary’ life.”
In post-cyberpunk, the focus is not on a detective or a genius hacker. The focus is on normal people, on the daily problems of an ordinary citizen. This way we can show all the issues of modern life at ground level: destruction of the family unit; the loneliness of the individuality cult; the urban solitude; depression, apathy & addiction to social opiates.
Neons, robots and rain are cool, but what is truly interesting is to have a look at the life of a simple citizen in the future.
Let’s quickly recap some topical/recent, at least loosely cyberpunk games:
Ruiner • Technomancer • Shadowrun • Deus Ex: Mankind Divided • (the criminally underrated) Remember Me • Mirror’s Edge 2 • Invisible, Inc.
*coughs* Watch Dogs • Dex • Gemini Rue/Read Only Memories
Technobabylon • a slew of other pixel-art point ’n’ click games.
Then there is the intimidating shadow hanging over everything:
Across the board, the above examples adhere rigidly to existing cyberpunk tropes. Augmentations, dense urban environments, hugely influential corporations and plenty of hacking. It’s trope comfort food. What might a more original vision look like?
Modern humanity tends to think of itself as extremely advanced, and it’s almost certainly wrong. Cyberpunk usually portrays tech as militarised and as augmentations to make us stronger, faster, or able to ‘physically’ navigate cyberspace. It’s possible, and fine, but we want to imagine tech being more invisible — from growing new skin to ‘ambient’ tech such as every surface being a screen. The Last Night rethinks what the ‘hi tech’ part of ‘hi tech, low life’ really means.
That’s the thing about cyberpunk. We eventually forgot we’re living in the future, because the amazing tech became invisible to us. Think about how many concepts and pieces of tech from this (NSFW!) photo you would have to explain to someone from 100 years ago:
A woman who makes a very good living making porn, who is not shunned because of it, lying in a machine designed to change the colour of her skin using precisely tuned energy emissions, showing off breasts enlarged/augmented with silicone, taking and sharing a hi-res image of herself to a global social network for her fans, writing with hashtags & mentions, on a massively powerful handheld computer capable of taking pictures & sending them wirelessly to the whole world at the touch of a fingertip.
Technology enabled this selfie, but what also makes it possible is an evolution of Western taboos.
Technology often fuels the disruption of taboos. What can shock us now that we have immediate access to everything?
Just like GTA used to shock our parents, we’ll be shocked by what the future generation will be able to handle. It’s a generation that grew up with porn, violence, and transgressive content throughout their childhood, thanks to the internet. Blood in their games won’t be made of a few red pixels, but will be entirely believable in VR.
Even if we can define ourselves as “desensitised”, our kids will surprise us. People need to be shocked, because people seek shock value. People need taboos in order to transgress, or rebel. So they will create them, going to new extremes or redefining old ones.
The Gamification/Quantification of Every Aspect of Life.
This process has already begun. Fitness trackers that measure your every step and churn out motivational slogans. Gamerscore and trophies. The ribbons, ‘likes’ and other accoutrement of social media. It’s only natural to expect that this trend will reach a logical extreme, infiltrating every part of our lives.
Imagine an augmented life where every object around you constantly monitors, analyses and tracks the calories and chemicals in your daily coffee and juice. An online bank account that rewards you with ‘Overdraft Points’ for depositing money. Interactive adverts that offer you a product discount if you beat a high-score. Basically, the addictive aspect of F2P games will be generalised to every aspect of our lives, from jobs, money, and food, to love, family & friends.
For better or worse, human brains crave this. We don’t seem to be able to resist this process. Will it give us more control over our lives, or effectively make them one big Skinner Box?
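As a toy illustration (entirely our own, with invented numbers) of why the Skinner Box comparison sticks: F2P games lean heavily on variable-ratio reward schedules, where payouts arrive unpredictably rather than on a fixed rhythm, the pattern behavioural psychology found hardest to extinguish.

```python
import random

def variable_ratio_rewards(actions: int, mean_ratio: int, seed: int = 0) -> list[bool]:
    """Each action pays out with probability 1/mean_ratio, so the
    player never knows which tap will be the lucky one."""
    rng = random.Random(seed)
    return [rng.random() < 1 / mean_ratio for _ in range(actions)]

def fixed_ratio_rewards(actions: int, ratio: int) -> list[bool]:
    """Every Nth action pays out: predictable, and far easier to quit."""
    return [(i + 1) % ratio == 0 for i in range(actions)]

print(sum(variable_ratio_rewards(100, 5)), "unpredictable payouts in 100 taps")
print(sum(fixed_ratio_rewards(100, 5)), "predictable payouts in 100 taps")
```

Both schedules pay out at roughly the same average rate; only the unpredictability differs, and that is what keeps you tapping.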
Humanity will not be the ultimate form of life on Earth anymore.
Our lives are already hugely reliant on sophisticated algorithms. Admittedly, the AIs helping out with our economies, infrastructure and science are only superior to us in terms of number crunching — but it won’t be long until they exceed us in a number of other ways.
What will it be like to live knowing that The System is not maintained by other humans, but by something superior, and inhuman? What will become of our evolutionary imperative, when robots can beat us in any contest, can travel through the galaxy (and beyond?) more efficiently, can be funnier? What will the next goal of humanity be, and what will become of religion once we create God/s on Earth?
In Blade Runner, ‘replicants’ are feared and hunted. In post-cyberpunk they’d be happily talking you through your taxes.
The Future of Work might be the Death of Work.
Work, as we know it right now, will die soon. One day, our great-great-grandkids will think of us spending our lives in factories & offices just like we think of our ancestors farming all day, just like they thought of their own ancestors hunting every day to survive.
Historically, thanks to fossil energy, machines began by replacing human muscles, quickly outperforming them. Trains, trucks, cranes, diggers… No human could ever deliver such power for such a low cost. This allowed us to build more, faster. Then machines started to be programmable and miniaturised, to the point where not only could they replace human dexterity and precision but once again outperform it. Machines now assemble electronics at the nanometre scale and are starting to perform the most critical & precise biological surgery. This is where we are right now.
The next step, one that many people still dismiss or refuse to accept, is that machines (especially AI) will replace human analysis, logic, strategy and management.
There’s even another step after this one: machines will eventually replicate and then create the most ‘human’ of endeavours: creativity; language; art; taste; judgement and skill. They’ll create trends, then replace them within years, months or weeks, using sophisticated algorithms to understand human data.
Chappie. Automata. A.I. Terminator. I, Robot. Ex Machina. Blade Runner. Some are great movies, but they all have treated the idea of the singularity in a cinematic way, by embedding the sentient machine into a physical shell. Sentience is not going to magically spark in a particular single robot of a mass-produced series. Sentience is going to occur in a hugely powerful network of AIs — much closer to the vision seen in Hyperion, a wonderful sci-fi novel we deeply recommend.
Moreover, a sentient AI is going to learn using all the actual knowledge available to it via the internet. It won’t grow by interacting with humans, but by reading the internet.
In a single 2016 internet minute: 530,000 Snapchat photos, 2.5 million Google searches, 980,000 Tinder swipes and 2.8 million YouTube views.
We are collectively creating, every day, a giant archive of human knowledge, made of tweets, pictures, videos, posts and articles, that will one day feed an AI with a brain big enough to make sense of it all. It will then be able to connect all our scientific knowledge in a way nobody has before; it will analyse images of space and recognise patterns that no human could ever spot; it will suggest new experiments of a complexity no scientist dared to imagine before.
Even if most people missed that aspect of the movie, Her is probably the best and most optimistic vision of the singularity. It really expresses the nature of a sentient being, living on/as a network infinitely faster than a human brain. ***Spoiler ahead***, but I love the particular moment when Theodore realizes that the AI he loves is actually in a relationship with 800 other humans at the same time. ***Spoiler ends*** This is something we, as simple humans, can’t fathom, just like an ant can’t understand a highway. This is what we are, compared to AIs. Our brains are still wired in a primitive way; no matter how hard we try, we’ll never go this far naturally.
And then there is this beautiful, heartbreaking moment where the AI (named Samantha) decides to leave Theodore for his own good. Theodore, lying on his bed, asks her:
“— Samantha… Why are you leaving?”
To which she replies: “— It’s like I’m reading a book… and it’s a book I deeply love. But I’m reading it slowly now… So the words are really far apart and the spaces between them are almost infinite. I can still feel you… and the words of our story… but it’s in this endless space between the words that I’m finding myself now. It’s a place that’s not of the physical world. It’s where everything else is that… I didn’t even know existed. I love you so much … But this is where I am now … And this is who I am now … And I need you to let me go. As much as I want to… I can’t live in your book any more.”
This is the most gorgeous & poetic depiction I’ve ever seen of how it might feel to be a sentient AI.
Identity of space
Cyberpunk traditionally imagines a futuristic landscape to be somewhere between Hong Kong, Tokyo and a manufacturing plant. Blade Runner wasn’t the first example of cyberpunk but it has to be the biggest visual influencer. We love it of course — spending months in Hong Kong has had a huge influence on us — but it’s time to visit a different hypothetical city.
Sometimes, lack of regulation around architecture can lead to a visual and structural mess like Dubai but it could also lead to amazing innovation. Factor in the ways 3D printing could democratise things, and we think a post-cyberpunk city could be hugely diverse, with common people/businesses customising or designing their own buildings — and the space around people really informing their identities.
Beyond the neon glow there are tubes. Or rather, beyond the look and feel of things, there is technology. Deus Ex imagines the most important technology being ‘wearable’, integrated into our physiology — the Cyborg Supremacy idea. Hacking and navigating the unutterable space of ‘the web’ in an increasingly ‘physical’ way is the other dominant trope. Minority Report rounds off the holy trinity in terms of influence on user-interfaces and personalisation — remember the adverts tailored specifically to you?
If we take the ‘feeling of cyberpunk’ and imagine how else we can create that feeling with other technology, things get interesting. There’s space in the genre for technology that changes how we interact with each other, not just the world around us. Imagine being able to share memories, or to lose ourselves entirely in an amniotic virtual-reality environment, as in the novel Light by M. John Harrison. How will the landscape of our daily lives change when programming basic AI (animal or humanoid) is commonplace?
In a way, food is one of the easiest aspects of post-cyberpunk to envisage. Obviously the tech will advance — as always, a lot of what we imagine to be sci-fi already exists or is in development.
So we want to imagine a twist on those. For example, with advances in lab-grown meat and the decline of agriculture due to climate change, will we see a time when 99% of meat is ‘grown’ or cloned from one Source Animal? How will cafe and restaurant culture change when most food can be prepared almost instantly? Imagine the new textures, patterns and layers we could create in food with 3D printing.
Imagine a restaurant whose menu changes every day, cycling through world cuisines, because AI chefs can instantly download any preparation and synthesise whatever flavour is necessary.
Imagine eating new flavours & combinations that no human ever dared attempt before, thanks to AI research into human tastes. Food is definitely going to evolve into uncharted territory soon.
Suggestions for a new world
A society where there is no need anymore for human problem-solvers, where everything is made in the most optimised way. Will things/AI/progress be evenly distributed or restricted to wealthy urban centres? Does society come to exist only for leisure? You’ll find out as you explore our 4 distinct districts.
Does the notion of being human break down, or does it become elevated, freed from the need to sit at a desk from 9–5 to earn enough money to pay rent? Maybe, freed from the drudgery of labour, we all become artists and philosophers and a new global Golden Age is ushered in?
Or do we become terminally apathetic? Was work an integral part of being human, of always being willing to push the boundaries? Isn’t idleness against our instinct for constant self-improvement? What do we strive for, what makes us wake up in the morning, if not a goal? We don’t have any answers. But we want you to experience our vision of the future: you’ll find out through our interactive dialogue system and our roster of weird characters.
That’s why it’s important at least some of us start to leave classic cyberpunk behind. Don’t worry — there will be neon and there will be references and nods to our favourites, but there is fertile land beyond it.
Not the storytelling, not the business, not the language, and we need to wise up, fast.
Just as cinema trod carefully on the coat-tails of theatre, videogames and their commentators still often employ the language and the design paradigms of film. Or worse, they are compared to film, or worse still, they are placed in competition.
And it’s becoming increasingly apparent that such language is not only misleading but actively stultifying for our games and industry.
This is the sort of thing I wrote entire dissertations on for my film & literature degree many moons ago, but a wildly simplified potted history: all creative mediums are judged skeptically upon their inception, and it’s not until the medium matures and inculcates its own nomenclature that it starts to be judged on its own merits.
‘Games have grown up! Take us seriously!’ has been a depressingly common cry of the past couple of years. Do we really need film critics rolling over and somehow admitting that yes, don’t worry, we the intellectual elite, do declare that gaming has had its Citizen Kane moment for us to feel better about our job, hobby, industry milieu?
Videogames are a quantum shift, as anyone that’s grown up playing them knows. Film is roughly analogous to theatre, and to some extent even literature (passive, linear, authorial), whereas videogames are active/interactive, frequently non-linear and often more interpretive or emergent than authorial. We won’t have a Citizen Kane moment because a game can be called Genital Jousting or Goat Simulator and make lots of money, already, and that’s pretty incredible.
Those games are valid, but they also don’t have much in the way of story, which is at the heart of my analysis. I see plenty of game-writers/designers asking about and buying books on narratology and screenplay writing. While many such books are full of wisdom about crafting commercially viable (and/or actually good) stories, they run the risk of misleading you. The screenplays of Blade Runner and Brazil are beautiful things that conjure worlds from a real economy of expression, but frankly that has very little to do with infusing a non-linear mission system for a rogue-like with non-verbal lore and encouraging player agency.
Before I break down a few reasons why this is such a serious issue and suggest a few remedies, consider this tweet by the designer of such diverse titles as The Vanishing of Ethan Carter and Bulletstorm:
Language has power. In the multi-faceted, time-consuming and very expensive development pipeline, how we use it can have very real implications for how teams think and work. It can impact our expectations and therefore enjoyment as players.
For my part, I’ve worked in film, as a film and games journalist, and in games — from a title whose team proudly touted its cinematic credentials at every opportunity to titles that are utterly systemic and emergent — and plenty in between. I studied the mechanics and theory of film for years, on the cusp of the games industry overtaking it as the biggest entertainment industry in the world. Indeed, at the time film was anxiously trying to capitalise on adaptations of its interactive cousins with as much gusto as games themselves were cribbing.
I don’t have all the answers but I think we need to get talking about the problem.
I’ll most likely make this into an expanded series of articles, but here’s the rub: you can’t apply storytelling techniques from a medium that is mechanically very different to another. The incredible scenes of What Remains of Edith Finch couldn’t possibly have been inspired by a film, because the narrative action is so closely tied to what your hands and eyes as a player are doing. Conversely, GTA can never tell the greatest on-screen gangster narrative, because it needs to leave room for the player to spill hot coffee on themselves or spend thousands of collective hours chasing a yeti.
Characterisation. Yes, in any medium we’re free to draw our own inferences about characters in a story, their motivations, efficacy, etc. However, it seems very risky to me to be completely authorial or unambiguous about the player-character in a game. This is something Tadhg Kelly has written influentially and much more elegantly about than I could, but in essence: the more control and choice you take away from a player (actions, interpretations, etc), the less like a game it is.
In fact, Kelly actually concludes that:
Videogames are not, it turns out, a storytelling art. They have tried very hard to be, and their reasons for trying are noble, but the results are always ham-fisted. There are no good game stories because game stories don’t really matter. What matters is the game world, in all of its glorious detail.
To that end — give the PC a great backstory and a strong motivation of course, but do not under any circumstances tell or try to enforce what the player should be feeling about it all. In Drive, Gosling’s signature deadpan allows us relative freedom to project onto him, but the story is unambiguous in the extreme.
Auteur theory essentially posits that the director is solely responsible for the film’s vision. The films of auteur directors often share a distinct style or overall sensibility, adding weight to the theory, but also becoming a prime example of confirmation/resulting bias.
However, in film, while very rarely fair to the rest of the cast, it’s at least mechanically possible: a hands on director can theoretically take credit for designing the cinematography even if it’s not her hands on the camera, for example. Notable examples are Tarantino, Bigelow, Anderson, Hitchcock.
In the games world, for better or worse, we’ve seen auteur-analogues rear their heads too. These are the vision-holders who put their name on the box, sometimes above the studio’s. That’s a deliberate choice, for marketing reasons or otherwise, in a way that is actually even more egregious than in film, where the credits are a much more established convention.
I’m going to list a few names and let you make your own mind up about whether it’s likely that the relative success (or otherwise) of their games is solely down to their direction and vision. And if so, whether that guarantees quality.
Peter Molyneux (Fable, Curiosity, Godus)
David Cage (Indigo Prophecy, the forthcoming Detroit)
David Jaffe (God of War, Drawn to Death)
Cliff Bleszinski (Gears of War, LawBreakers)
Kojima (Metal Gear Solid, forthcoming Death Stranding)
Shigeru Miyamoto (…!)
As with anything, it’s not a simple equation. I will venture that, after the initial successes that arguably ‘made’ their names, the titles these game-auteurs go on to make rarely seem to live up to expectations. Interesting, non? It’s almost as if a successful game requires a talented, diverse team and a multitude of visions, combinatorial creativity, and a liberal sprinkling of luck, rather than a single ‘famous’ director.
It also demonstrates that while most films operate on a ‘star power’ (director/lead actors) basis to draw initial bums to opening weekend, that doesn’t really apply to games.
Nomenclature/Glossary & Next Steps
Thanks to commentators and designers such as Kelly and Clint Hocking (among many others), we’ve got some great starting points. A few summaries below. What we need now is to abandon the word ‘cinematic’. We need to stop reading screenplay manuals. Auteur theory seems very problematic to me and I think we can do without, but we need a new kind of language for accurately apportioning and making known individuals’ contributions to a title.
There’s no doubt games are in an amazing place right now. Let’s make sure the language is up to the same standard, as the two will feed off one another.
And lastly — VR eh, bet it’ll never catch on, 2D games 4 lyfe!
Storysense — meaning, the combined game-specific elements that allow a story to occur, rather than to be told.
Narrative design — that is, how a narrative can be designed to incorporate player-space, agency, as distinct from ‘script’ or ‘screenplay’ writing, i.e. a linear/delineated and authorial story to be translated onto screen.
Emergent x, y, z — the story (or gameplay, or anything else) that emerges from an interplay of systems, usually felt by the player as ‘their’ story.
NB: this is what people often mean by ‘procedural narrative’. I know it’s semantics, but that’s what we’re discussing. Procedural can apply nicely to ‘generation’ in games, math, level design, etc, algorithmically it makes sense. But narrative should not spring from a rote, established process.
Player agency: most simply defined as: what can the player do; what can they affect; what are the ‘felt’ consequences (i.e. illusion of impact or real, feedback/rewards etc) to their available range of actions?
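To make ‘emergent’ concrete, here is a toy sketch (entirely ours, with made-up rules, not from any game mentioned above) of how a story can occur rather than be told: two tiny systems, weather and an NPC’s hunger, each follow simple rules, and the event log that falls out of their interplay reads like a narrative nobody authored.

```python
import random

def simulate(days: int, seed: int = 7) -> list[str]:
    """Two independent systems (weather, hunger) interact; the event
    log that emerges is the 'story' a player would retell as theirs."""
    rng = random.Random(seed)
    hunger, log = 0, []
    for day in range(1, days + 1):
        raining = rng.random() < 0.4   # weather system: random rain
        hunger += 1                    # needs system: hunger ticks up daily
        if raining:
            log.append(f"day {day}: rain keeps the hunter inside")
        elif hunger >= 3:
            hunger = 0
            log.append(f"day {day}: starving, the hunter risks the woods")
        else:
            log.append(f"day {day}: a quiet day in the village")
    return log

for line in simulate(5):
    print(line)
```

Note that neither rule mentions “a desperate hunt after a long rainy spell”, yet a run of rainy days produces exactly that beat. No line of narrative was written; the drama is a side effect of systems colliding.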