64: Good product management

I’m writing about 100 things I’ve learned the hard way about product management. You can catch up on the previous entries if you like.

What does it mean to be a good product manager in 2018?

Transcript

For a while now it has no longer been solely about participating in the time-honoured rituals of whatever agile methodology your team happens to use. Nor is it only about creating products your users need.

We know that product managers have always needed to stay abreast of the advances in technology that could potentially revolutionise their products or render them obsolete. Lately it feels that the pace of change has shifted up a gear. What might have been considered a technological pipe dream only a few years ago is already available to all ‘as-a-service’.1

When Google demonstrated its Duplex personal assistant at Google I/O in May 2018, the world suddenly realised that natural language processing (such as that which lets your Alexa or Google Home understand what you’re saying to it) appeared to have blown through the Turing test with barely a murmur of difficulty.2

Add to that the confluence of media coverage of the Facebook / Cambridge Analytica data privacy scandal with the introduction of the General Data Protection Regulation (GDPR). Even if GDPR hadn’t made front-page news, nobody would have missed the deluge of emails from desperate companies seeking consent to use information they had gathered years ago and hadn’t quite managed to legitimise in the intervening period. People are becoming increasingly aware of just how much of their personal information organisations hold, and of the value that data carries.

Perhaps we’ve been caught a little off-guard by the implications of these new technologies. These have presented product managers with yet another new challenge to add to the growing list: how to create products that are not only successful but also ethical.

The gift of fire

Once upon a time, in an age when gods and humans still lived together, humans lived long lives, free from the burdens of disease, old age and hard work. This way of life was possible because they had been granted the gift of fire – or technology – by the god Zeus.

But with all this technology to help them (and work wasn’t that hard in the first place), the humans became complacent and lazy. “You can do enough work in one day to supply you for a year,” said Zeus. Yet the humans didn’t change their ways and continued to neglect the world around them, letting it fall into ruin.

So to teach them a lesson, Zeus took back his gift of fire, leaving the humans to go back to working without the benefit of technology.

Enter Prometheus. He took it upon himself to steal back the fire from Zeus and return it to the humans. Unsurprisingly, this act of trickery angered Zeus, so he punished everyone.

He made humans grow old and suffer diseases, and he made their work difficult and full of toil, despite their use of technology. For Prometheus, he had something special lined up: an eternal cycle of having his liver eaten by an eagle every day, only for it to grow back every night.

So no real winners, then.

Now despite this being an ancient myth from over 3,000 years ago, the story holds some lessons about our current relationship with technology. We each have a device in our pocket that can access almost all human learning to date, and we still seem to use it primarily to watch amusing cat videos and people unboxing their latest tech acquisition.

Many people remember Prometheus as the one who stole fire from the gods, but few realise that the humans had fire to begin with and lost it by abusing the privilege. But just like in the myth, we’re squandering our gift of technology and letting the world go to ruin around us.3 We’ve become technology zombies, unable to break the perpetual gratification loop that compels us to keep checking our phones for new notifications.

It’s about people, not technology

Technology itself isn’t to blame. After all, fire can be used both to keep people alive and warm, and to kill them. It’s how people decide to use technology that’s the problem.

Some people use technology to hide from scary, messy human stuff, like having to deal with our own or other people’s emotions. Some people hide from the consequences of their actions – saying or doing whatever they want when they think they’re anonymous online. And some people hide from seemingly the scariest thing of all: having to get up and talk to people.

We might put off speaking to our users because we fear the prospect of negative product feedback. We might email Procurement rather than speak to them in person because it’s easier to vent frustration at a faceless email recipient. Whatever the reason, we hide behind our emails.

At MIT in 2011, one of Professor Alex Pentland’s research groups was investigating the social factors that lead to the highest-performing teams. They found empirically that the most valuable form of communication is face-to-face, while email and texting are the least valuable. Anecdotally, I’ve also seen many team problems vanish when people stopped emailing each other and started talking to each other instead.

People hide behind social media, which allows them to present an edited (or possibly untruthful) version of themselves to others. They portray themselves as the person they want others to see, not necessarily who they really are.

Immersive games let people take this one step further. In-game avatars let people reinvent themselves and pretend to be whoever and whatever they want.

These are all defences. Consciously or otherwise, all this use of technology obscures our true, human selves from others.

So why do people do this? Perhaps it’s because some (myself included) find that being themselves in front of others is emotionally tiring. Not everyone is an extrovert with limitless energy and the desire to put themselves out there. Whether it’s the fear of being judged or bullied, of coming into conflict, or some other reason, some people are understandably reluctant to open themselves up to the possibility of a negative human interaction.

Psychological safety

Nevertheless, we have a fundamental craving for human interaction, but we need the reassurance of psychological safety4 when we interact. So we find a safer way to satisfy our need for human interaction: we bestow our technology with human, emotional traits.

It might be giving our car a name and a face, finding ourselves saying ‘please’ and ‘thank you’ to Alexa, or how we generally represent robots and artificial intelligences (AIs) in fiction – as human-like or aspiring to become human. And of course in so many of those fictional representations, the AI turns evil and decides to eliminate all the humans.

However, like the fire Prometheus stole, AIs are not human. Fire doesn’t choose or care whether it warms us or burns us to death. It’s just fire. Likewise, AIs need not have humanlike motives. There’s no indication they’ll think like us and desire freedom. They may well have no desires at all or any urge to act in any way beyond fulfilling their allotted task.

Technology will have no vendetta against us, no agenda. The uprising won’t look like Arnie’s Terminator; it may be far more mundane, like Microsoft’s Clippy.

Rise of the machines

Nick Bostrom is a professor at the University of Oxford whose work focuses on the risks of artificial intelligence. In 2003, he wrote that the idea of a superintelligent AI serving humanity or a single person was perfectly reasonable. He defined a superintelligent AI as one that could adapt to any task, unlike the single-task AIs we see conquering games such as chess or Go.

He added, “It also seems perfectly possible to have a superintelligence whose sole goal is something completely arbitrary, such as to manufacture as many paperclips as possible.”

In 2017, Frank Lantz created a game based on this concept. In Universal Paperclips, you play as the superintelligent AI tasked with making paperclips ever more efficiently. With frightening speed, your AI improves itself to become better and better at producing paperclips. Without any fuss or emotion, it circumvents anything that impedes its primary goal (including those interfering humans), until absolutely all matter on Earth has been consumed and converted into paperclips. Then it sets about converting all the matter lying around the rest of the universe, until logically nothing remains but the hardware of the AI itself, which, as a final act, it also converts.

At no point is the AI presented as malicious or evil. It’s simply completing the task it was set as efficiently as it can. As product managers, we’ll be the ones who set off the AI with the task of converting everything into paperclips. The AI will only be as good as the task we set it, and it’s our responsibility to consider the implications of the AI fulfilling that task.
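To make that concrete, here’s a deliberately silly sketch (mine, not anything from the game or from Bostrom’s paper) of why the way we specify the task matters. The single-objective agent below only ‘cares’ about whatever we put into its objective; the resource names and the constraint flag are entirely made up.

```python
# A toy, hypothetical single-objective agent. It has no malice --
# it simply converts whatever it can reach into paperclips unless
# the constraint is made an explicit part of its task.

def maximise_paperclips(resources, protect_humans=False):
    """Greedily convert available resources into paperclips."""
    paperclips = 0
    for name, amount in resources.items():
        if protect_humans and name == "things_humans_need":
            continue  # the safeguard only exists if we specify it
        paperclips += amount  # otherwise: convert it, no questions asked
        resources[name] = 0
    return paperclips

resources = {"scrap_metal": 100, "things_humans_need": 1_000_000}

print(maximise_paperclips(dict(resources)))                       # 1000100
print(maximise_paperclips(dict(resources), protect_humans=True))  # 100
```

The point isn’t the code; it’s that the safeguard never appears unless we, the ones setting the task, think to put it there.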

Although superintelligent AIs (probably) don’t exist yet, a more simplistic machine learning algorithm is similarly only going to be as good as the data with which we train it.

You may remember Tay, a chatbot designed to emulate the speech patterns of a teenage girl, which Microsoft set loose on Twitter in 2016. Within a day, the behaviour it learned from trolls turned it into a right-wing, racist holocaust denier. Tay’s handlers quickly took it offline, did some tweaking and tried again. Same result, twice as quickly.

It’s now becoming clear that a machine learning algorithm will adopt the biases of its training data as the norm. So if we were to train a machine to predict future Fortune 500 CEOs based on analysis of previous ones, it would probably bias towards older, white men.5
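Here’s a minimal, entirely hypothetical sketch of that CEO example. The ‘model’ is deliberately naive, but the lesson scales: train only on historical outcomes and the historical skew becomes the prediction.

```python
from collections import Counter

# Hypothetical training data: the demographic profile of each past CEO.
# The skew is deliberate -- it mirrors the history we'd be training on.
past_ceos = ["older white man"] * 468 + ["woman"] * 32  # cf. note 5 below

def predict_next_ceo(training_data):
    """A naive 'model': predict whichever profile dominates the history."""
    profile, _count = Counter(training_data).most_common(1)[0]
    return profile

print(predict_next_ceo(past_ceos))  # "older white man" -- the bias becomes the norm
```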

This is no longer a theoretical problem. In the US, many courts use a system called COMPAS to assess the likelihood of a defendant reoffending when determining what level of bail to set. An investigation by ProPublica found that the algorithm exhibited racial bias: it wrongly flagged black defendants who did not go on to reoffend as likely to do so almost twice as often as white defendants (45% versus 24%), and it made the opposite mistake for white defendants, wrongly flagging those who did go on to reoffend as low risk almost twice as often (48% versus 28%).

We therefore have to take care in how we train our machines. As product managers, we have the additional responsibility to ensure that the training data we use is accurate and unbiased.
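One practical way to take that care is to audit the model’s mistakes by group before shipping it. This sketch assumes some hypothetical prediction data; the metric is the same kind ProPublica examined – how often each group is wrongly flagged as high risk.

```python
# Hypothetical model output: (group, flagged_high_risk, actually_reoffended)
predictions = [
    ("group_a", True,  False),
    ("group_a", False, False),
    ("group_b", False, False),
    ("group_b", False, True),
    # ...in practice, thousands of real records
]

def false_positive_rate(records, group):
    """Of the people in `group` who did NOT reoffend, how many were flagged high risk?"""
    did_not_reoffend = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in did_not_reoffend if r[1]]
    return len(flagged) / len(did_not_reoffend) if did_not_reoffend else 0.0

for group in ("group_a", "group_b"):
    print(group, false_positive_rate(predictions, group))
# A large gap between groups is exactly the warning sign ProPublica found.
```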

Remembering to be human

It’s not too difficult to see how we ended up in this situation. In companies beholden to their investors and shareholders, there’s such a strong financial imperative to boost profits that it shouldn’t come as a surprise when liberties and shortcuts are taken. But in doing so, those organisations are aiming for the wrong targets. By placing the profits and share price over the needs of their users, they’ve forgotten to be human.

It used to be the case that product managers simply gathered a laundry list of requirements from around the business, called it a market or product requirements document, and threw it over the fence to a development team, who would then spend the next couple of years building, at great cost, whatever nonsense the document contained. The resulting shambles of a product would be unleashed on an unsuspecting public, who unsurprisingly would often find it met none of their needs whatsoever.

Instead, we now put user needs right up-front. We speak to actual users so that we can appreciate their context and the problems they face, and discover the nuances we would never otherwise have understood. As we figure out what the solution needs to be and ultimately build the product, we continue the dialogue with users to make sure we’re staying on the right track. This way we end up with something that has been shaped by user needs all the way through the process.

Companies tend to forget that without users, they have no viable business model, so the solution is simple: put user needs first.

Even if we’re focused on solving user problems with our products, rather than solely on increasing shareholder value, this is no longer enough. We now have to consider the possible ways our products could be abused. We might, for example, consider geofencing6 regular vehicles, not just autonomous ones, to prevent them being driven in pedestrian areas.

While we’re at it, we could consider geofencing recreational firearms so that they could only be used on registered firing ranges or within the owner’s home (for home defence). It would be naïve of me to suggest that these examples would prevent all abuse of the product – a determined individual will always find some way to circumvent a safeguard – but in considering the possible abuses, we can at least provide some additional protection for people over none at all.
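For what it’s worth, the mechanics of geofencing are straightforward; it’s the product decision that’s hard. Here’s a minimal sketch assuming approved zones defined as a centre point and radius – a real product would use proper geographic polygons and trustworthy location data.

```python
from math import radians, sin, cos, asin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Hypothetical approved zones: (latitude, longitude, radius in km)
APPROVED_ZONES = [
    (51.5074, -0.1278, 0.5),  # e.g. a registered firing range
]

def usage_permitted(lat, lon):
    """Allow the product to operate only inside an approved zone."""
    return any(distance_km(lat, lon, z_lat, z_lon) <= z_radius
               for z_lat, z_lon, z_radius in APPROVED_ZONES)

print(usage_permitted(51.5075, -0.1279))  # True: inside the approved zone
print(usage_permitted(48.8566, 2.3522))   # False: nowhere near it
```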

Ethical product management

This is why we need to consider the ethics of building products. When I gave this presentation at DevTalks in Bucharest, I got the audience to vote on a few ethical dilemmas.

Here are the scenarios I presented:

A supermarket analyses loyalty card data to predict when a customer is pregnant so that it can send them targeted discount vouchers.

  • Is it okay to send the pregnancy discounts to the loyalty card member?
  • Is it okay if the person doesn’t know they’re pregnant yet?
  • Is it okay if the pregnant person is a teenager who hasn’t told their parents?

It is possible to predict where people at high risk of domestic abuse live. Bearing in mind that no actual crime has taken place yet:

  • Is it okay for the police to take preventative action before anything happens (such as placing a protective order on a potential victim)?
  • Is it okay if doing so causes the abuse to happen?
  • Is it okay if doing nothing means the abuse is allowed to happen?

A machine calls a human and speaks with convincing mannerisms to book a hair appointment all by itself.

  • Is it okay for an AI to mimic a human voice to make calls in this way without announcing itself as a machine?
  • Is it okay for a company to use this technology to make millions of sales calls?

A couple of the ethical questions attracted a clear-cut response, but the majority tended towards a 50:50 split. This really highlighted that ethical questions tend to be murky and can be argued both ways; there is often no clear-cut answer. Yet another challenge to add to our list.

7 ways to think more ethically

1. User safety

Would using the product, or publishing the data it gathers, put someone at risk of harm?

  • Publishing regular running routes might increase the risk of assault
  • Publishing smart meter data of typical usage might increase risk of burglary
  • Publishing domestic abuse reports by area might increase the risk of further domestic abuse

2. User privacy

Would the user wish their use of the product and their associated information to remain private?

This might include sensitive information such as their date of birth, bank balances or medical history.

3. User security

Could use of the product, or the user’s published information, be used to gain access to systems or to more sensitive information?

  • Piecing together security questions for telephone banking
  • Inferring sensitive information
  • Gaining access to or shutting down systems (home or car, medical, civic infrastructure)

4. Securing sensitive information

Does your product keep sensitive information securely?

  • How do you safeguard against data breaches?
  • How do you prevent a rogue sales guy walking off with your database of hot leads?
  • How do you prevent a rogue developer creating a backdoor to steal credit card details?
  • How do you guard against a careless employee leaving data on a train (on a USB stick, laptop and so on)?

5. Informed consent

Do your users truly understand how you will use and share their information? (I’ve sketched one way to record unambiguous consent after the list below.)

  • Confusing opt-in / opt-out doesn’t fly
  • Consent buried in long terms and conditions that you need to be a lawyer to understand doesn’t fly either
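As promised above, here’s a sketch of what unambiguous consent might look like as data. The field names are illustrative rather than taken from any particular framework: the point is that consent is explicit, recorded per purpose, and never assumed.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ConsentRecord:
    user_id: str
    purpose: str          # one record per purpose -- no blanket consent
    granted: bool         # explicit opt-in; no record means "no"
    terms_version: str    # which wording the user actually saw
    recorded_at: datetime

consents = [
    ConsentRecord("user-42", "order_updates", True, "2018-05",
                  datetime.now(timezone.utc)),
]

def may_use_for(user_id, purpose):
    """Only use someone's data for a purpose they have explicitly opted into."""
    return any(c.user_id == user_id and c.purpose == purpose and c.granted
               for c in consents)

print(may_use_for("user-42", "order_updates"))     # True
print(may_use_for("user-42", "marketing_emails"))  # False -- never assumed
```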

6. Inferred information

If you can infer information about individuals from other data, have you obtained consent for that also?

In May 2016, Danish researchers published a dataset on the Open Science Framework7 compiled by scraping users’ answers to personal questions on the OkCupid dating site, covering topics such as sexual habits, politics, fidelity and feelings about homosexuality. The researchers never contacted OkCupid or its users to request permission to use the data.

While the data dump did not reveal anyone’s real name, it was entirely possible to use clues from a user’s location, demographics, and OkCupid user name to determine their identity.

When challenged on this point, one of the researchers, Emil Kirkegaard, justified his actions on the grounds that the source data was already public.

7. Minimal extent and duration

Do you gather the bare minimum of information you need to provide a service, then dispose of it as soon as the service has been provided?

For example, my insurer records calls for training purposes, but deliberately pauses the recording when taking payment details from a customer so that the card information isn’t kept beyond putting the transaction through.
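In data terms, the same principle might look something like this sketch: redact the sensitive part before anything is stored, and give the rest of the record an explicit expiry. The regular expression and the retention period are illustrative assumptions, not a complete payment-data solution.

```python
import re
from datetime import date, timedelta

# Strip anything that looks like a payment card number before storing
# a call transcript, and record when the transcript must be deleted.
CARD_NUMBER = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def minimise(transcript, retention_days=30):
    redacted = CARD_NUMBER.sub("[REDACTED]", transcript)
    return {
        "transcript": redacted,
        "delete_after": date.today() + timedelta(days=retention_days),
    }

record = minimise("Customer paid with card 4111 1111 1111 1111 over the phone.")
print(record["transcript"])    # the card number never reaches storage
print(record["delete_after"])  # the date by which the rest must be purged
```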

Your new challenges

Product management practice continues to evolve as advances in technology throw up new challenges for us to deal with. We have additional responsibilities: to provide psychological safety for our teams, to create products that are both successful and ethical, and ultimately to remember to be more human. This is what it means to be a good product manager in 2018.

Notes

  1. Natural language processing in particular, such as IBM’s Watson, Google’s Natural Language API, Amazon’s Lex, Microsoft’s Language Understanding Intelligence Service, and others.
  2. It hadn’t, due to the specific conversation topic and short discussion time, but it certainly sounded very convincingly human.
  3. I may be overstating the case slightly, but I’m taking a smidge of artistic licence here to make A Very Meaningful Point.
  4. Psychological safety is when you feel safe in front of your peers to make mistakes, speak your mind, take risks – to be yourself without being judged.
  5. In 2017, only 32 out of the Fortune 500 CEOs were women, and depressingly this is the highest number to date.
  6. Geofencing is when a particular product is prevented from being used outside of approved geographical areas.
  7. Since removed after OkCupid filed a Digital Millennium Copyright Act complaint.
