Accessible unethical technology

A head and shoulders shot of Laura looking upwards with a happy smile

The Accessibility Scotland 2019 theme was Accessibility and Ethics. In Accessible unethical technology, Laura Kalbag @LauraKalbag spoke about some of the big problems we face in tech today, and how we can start building inclusive and ethical technology tomorrow.

Mainstream technology today is designed to track and exploit us. As technologists who are also users, we both contribute to this exploitative system and are exploited by it.

Is making our tech accessible enough, when the driving forces behind that tech are unethical?

Can tech be inclusive if it causes disproportionate harm to people from marginalised groups?

We can do better.



Accessible unethical technology slides on Notist website


Yeah, I’m going to talk about accessible unethical technology.

Since I wrote a book about accessibility, I like to say I wrote this book; because some people, yeah.

I’ve had many people excitedly share with me the cool work that they do around accessibility and what their organizations do.

That’s really great. I can learn a lot from those people, and I love it.

However, too often, while these organisations might have good people working for them, the actual things they produce are not so good.

So, while my book is about accessibility, and accessibility is a core value in my work, my job is not entirely being an advocate for accessibility.

I’m not an expert, like many of the speakers here today.

My job is as the co-founder of Small Technology Foundation.

It’s a not for profit organisation advocating for and building technology that protects personhood and democracy in the digital network age. Ethical technology.

While inclusivity is such a key part of ethical design, this still leaves me with a bit of a conundrum. I want to be excited about really brilliant work that people are doing with accessibility, but I don’t want that work to be used to exploit people, or to unethical ends.

I keep coming back to this wise tweet by the brilliant Heydon Pickering.

“Not everything should be accessible. Some things shouldn’t exist at all.”

But what are the things that shouldn’t exist? What defines unethical technology?

What follows is an incomplete list of some of the ways internet technology specifically (because obviously technology is used across many industries) can be unethical.

1. The inequality in distribution and access.

Which is a key issue that all of us here seem to focus on.

The lack of accommodations for many disabled people could often be considered discriminatory, and the same goes for the lack of access for poor people, or people who can’t afford the latest device, or expensive data plans.

2. The lack of accountability and responsibility.

This is a huge area. Like misinformation: the centralised platforms we use encourage indiscriminate sharing, which results in the rapid and viral spread of misinformation. Like profiling: extracting personal information from every data point of what a person shares online, their devices, their locations, their friends, their habits, their sentiments.

It’s an invasion of privacy, and it’s done so without their true consent.

Like automated decision making, using extracted personal information to make automated decisions; like whether someone would be the right candidate for a job, or qualify for credit, or is a potential terrorist.

It’s usually treated as irrelevant whether these decisions are based on inaccurate or discriminatory information.

Like targeting: the profiling and targeting of individuals enables manipulation by the platforms themselves, or by advertisers, or by data brokers, or governments that use the information provided by the platforms.

Like insufficient security, particularly in situations where a platform collects people’s personal information, like your credit card information.

It’s irresponsible to not adequately protect that information. It’s a risk to store that information in the first place. If we do, we have to ensure it is stored securely.

So, whether it’s the business model, or the management, or the designers and the developers, or anybody else in the organization, often organizations don’t actually care to address the impact of their work.

Or worse, they deliberately design harmful outcomes that just deliver them more power, more money.

These are topics that have been explored in depth recently by Tatiana Mac @TatianaTMac (I recommend you follow her), who invokes ‘intent does not erase impact’ to describe the often haphazard way we design technology.

In her A List Apart article, ‘Canary in a Coal Mine: How Tech Provides Platforms for Hate,’ she explains,

“As product creators, it is our responsibility to protect the safety of our users by stopping those that intend to or already cause them harm. Better yet, we ought to think of this before we build the platforms to prevent this in the first place”.

People often say, ‘Ask for forgiveness, not for permission,’ in order to encourage you to be more daring. But instead, I think we should ask for permission, not for forgiveness.

Back to the unethical technology list.

3. Environmental impact.

Now that people are starting to take the climate crisis seriously, it’s time to reckon with the environmental costs of internet technology at scale.

The impact of running massive server farms, of blockchain technology that turns electricity into currency, of machine learning that uses vast quantities of energy.

4. Business ethics.

These are ethical considerations that apply to business in any industry, but they become exacerbating factors when combined with inequality in distribution and access, a lack of accountability and responsibility, or environmental impact.

Like proprietary lock-in: once you sign up for a product or a service, it’s difficult to leave.

To some, this could be a minor inconvenience.

But if your privacy is being routinely violated, the ability to leave becomes paramount.

Or take industry monopolies: what’s worse is when no alternatives exist. What if the only social platform we can use is one that’s environmentally damaging?

What if the only way we can participate in society simply is not accessible?

Earlier I mentioned that profiling is one of the key ethical issues in technology today.

Profiling is enabled by tracking. In order to develop profiles of you, they need data points.

Those data points are obtained by tracking you using any kind of technology available.

I didn’t imagine myself spending my days examining the worst of the web, but here I am.

I block trackers, and I’ve worked on Better Blocker’s tracker blocking rules for the last four years.

I still find it hard, trying to work out what these trackers are actually doing, or why they’re doing it, and blocking the bad ones to try to protect the people who use the product.

As much as I try to uncover bad practices and block and break invasive trackers, they just keep getting sneakier and more harmful.
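To make the mechanics concrete, here’s a minimal sketch of how a domain-based blocking rule decides whether a request should be blocked. This is not Better Blocker’s actual implementation, and the blocklist entries are just illustrative examples:

```python
# A minimal sketch of domain-based tracker blocking: a request is blocked
# when its hostname is a blocked domain, or a subdomain of one.
# The blocklist entries here are illustrative, not a real rule set.

BLOCKLIST = {"adnxs.com", "doubleclick.net"}

def is_blocked(hostname: str) -> bool:
    """Return True if hostname matches a blocked domain or any of its subdomains."""
    parts = hostname.lower().split(".")
    # Check the hostname itself, then each parent domain, against the blocklist.
    for i in range(len(parts) - 1):
        if ".".join(parts[i:]) in BLOCKLIST:
            return True
    return False

print(is_blocked("ib.adnxs.com"))  # True: subdomain of a blocked domain
print(is_blocked("example.org"))   # False: not on the list
```

Real blocking rules are far richer than this (they match URL paths, distinguish first from third party, and so on), but the core decision is this kind of hostname comparison.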

So, what is a tracker? What’s the kind of stuff that I block? I’m going to give you an example.

I visit the City A.M. site, because I’m well into business news. I went to look at the third party resources that it uses. These third party resources, the images, the scripts, the stuff like that, are usually a good indicator of basic trackers. As third party services, they can track you across the web on the different sites that use those shared trackers.

So, a quick inspect of the City A.M. site reveals 31 third party trackers.

That might seem like quite a lot to you, but actually for a news site, that’s very little.
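For anyone curious, the kind of inspection described here can be sketched in a few lines. This is a simplified illustration with made-up URLs, not the exact tooling used for the talk: it treats any resource served from a different host than the page as third party.

```python
# List the third-party hosts a page loads resources from.
# The page and resource URLs below are made-up examples.
from urllib.parse import urlparse

def third_party_hosts(page_url, resource_urls):
    page_host = urlparse(page_url).hostname
    hosts = set()
    for url in resource_urls:
        host = urlparse(url).hostname
        # Anything not served from the page's own host counts as third party.
        if host and host != page_host:
            hosts.add(host)
    return sorted(hosts)

print(third_party_hosts(
    "https://news.example/article",
    [
        "https://news.example/main.js",        # first party
        "https://ib.adnxs.com/tracker.js",     # third party
        "https://secure.adnxs.com/pixel.gif",  # third party
    ],
))  # ['ib.adnxs.com', 'secure.adnxs.com']
```

A browser’s developer tools show the same information in the network panel; tracker databases then map those hosts to the companies behind them.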

I pick out one tracker at random: ADNXS. I have a look at the statistics we’ve got for all the sites where we’ve found a call to it.

It’s on 16.9% of the top 10,000 sites on the web. We class that as a pandemic.

So, I type ADNXS into a browser to work out what it might be, and it’s AppNexus.

Apparently AppNexus powers the advertising that powers the internet.

Apparently, they say, “Our mission is to create a better internet, and for us, that begins with advertising.”

I’m not sure many people would share that opinion; perhaps Mark Zuckerberg would.

I scroll down to find their privacy policy, and they’re usually found tucked away in a footer in text that is so tiny and so hard to read, that it may as well be invisible.

That’s what they’re hoping for. A quick browse of the privacy policy alongside the script itself, and I come to understand that the AppNexus tracker is tracking visitors to create profiles of them.

These profiles are then used to target ads to the visitors for products they might find relevant. We all know what targeted advertising looks like.

The same products that follow you around the web all the time.

On Facebook, I get a lot of the ads targeted at women in their 30s: laundry capsules, shampoo, makeup, dresses. It’s really reinforcing some great stuff. And pregnancy tests, because that’s a fun thing to see an ad for.

Going back to the AppNexus privacy policy. Conveniently, they have a section detailing ‘What information do we collect and use?’

This includes a long, long list, so I’ve decided to pick out some of the scariest things.

Your device make and model, precise geographic location data, web pages or apps visited or used, and the times those web pages or apps were visited or used.

Information about you, or inferences about your interests that a seller, buyer, or third party provider has collected about you and shared with us; such as information about your interests or demographic information.

In brackets, your age, or your gender. They go to great lengths to emphasise that they don’t collect that information themselves; they just get it from other people, like that’s better. I’m curious about what these interests might be.

What might AppNexus know about me?

So Cracked Labs have done multiple reports into the personal data that corporations collect, combine, analyse, trade, and use.

Much of the combining and the analysing and the trading bits of the data is done by data brokers.

Two of the biggest data brokers are Oracle and Acxiom. According to Cracked Labs, Acxiom provides up to 3,000 attributes and scores on 700 million people in the US, Europe, and other regions.

Oracle sorts people into thousands of categories, and provides more than 30,000 attributes on two billion consumer profiles. So what attributes and what categories?

Again, I’ve picked out some of the creepiest ones, because this text is really tiny. One of nearly 200 ‘ethnic codes’.

That’s a reassuring word. Political views, your relationship status, your income, details about banking and insurance policies you hold, the types of home you have, including if your home is a prison.

The likelihood of whether a person is planning to have a baby or adopt a child, the number and age of children they have, their purchases, including whether a person has bought pain relief products and health-related products.

Whether a person is likely to have an interest in the Air Force, or the Army, or the Navy, the lottery or sweepstakes, or gay and lesbian movies.

The search history, including whether a person has searched about abortion, legalising drugs, or gay marriage. Whether they’ve searched about protests or strikes, or boycotts, or riots. The likelihood that a person is a social influencer or might be socially influenced.

It’s not just the data brokers that are doing this.

Most platforms are creating profiles of you, using them to target you and organize your personal feeds to keep you engaged and interested in their sites.

These attributes can be used to target you with advertising, not just for products you might like, but including the ads that political parties put on Facebook.

That’s what the Cambridge Analytica scandal was about: political parties having access to not just your attributes, but your friends’ attributes, and your friends’ friends’ attributes.

It’s no longer just your personal information; it’s your patterns and your habits.

Facial recognition and sentiment analysis are also being used to create a deeper understanding of you.

I could go on about this forever, and I just don’t have the time; but the book ‘Surveillance Capitalism’ by Shoshana Zuboff @shoshanazuboff, while very long and very dense, does go into this in great detail, covering both the history of this type of surveillance and the predicted future of these massive surveillance systems.

As she writes

“Surveillance capitalism unilaterally claims human experience as free raw material for translation into behavioural data. Although some of these data are applied to product or service improvement, the rest are fabricated into prediction products that anticipate what you will do now, soon, and later.”

So, it’s scary stuff, and I wanted to go in briefly into how we can protect ourselves just as individuals who use technology.

I’m not talking about security tips about good passwords and stuff like that, but how to protect yourself from tracking by the websites that you’re visiting, and the third party services that they employ on those websites.

  1. Avoid logging in if you can.
    Many platforms will still track you via your IP address, or fingerprinting, but whatever you can do to minimise them connecting what you’re browsing with your personal information, can help you stay protected.
  2. Avoid providing your real name.
    Of course, this is much trickier in professional settings, but where you can, use a pseudonym, a cute username. This prevents platforms from connecting the data they collect about you to further data from other platforms.
  3. Use custom email addresses that look a little like this:
    So you have your real email address, a plus, and then the custom bit.
    Many email services support custom email addresses like this, and will forward any emails to your primary inbox.
  4. Avoid providing your phone number.
    Folks in privacy and security will recommend you use two factor authentication; so you confirm your login on your phone or your email, or something like that.
    But be aware that your phone numbers are a little risky sometimes.
    A study into whether Facebook used personally identifiable information for targeted advertising found that when the researchers added and verified a phone number for two-factor authentication on one of the authors’ accounts, the phone number became targetable after 22 days.
    Not long ago, Twitter admitted that they did the same thing: ‘When an advertiser uploaded their marketing list, we may have matched people on Twitter to their list based on the email or phone number the Twitter account holder provided for safety and security purposes.’
  5. Use a reputable virtual private network, or VPN.
    VPNs will obscure your location, making it harder to track you and associate your browsing habits with you as an individual. But then you also have to make sure that the VPNs themselves aren’t trying to track you, because that’s how the cheap and free ones make their money.
  6. Use private browsing or browsing containers when you log in to a platform.
    This will make sure that any cookies that are set during your session will be discarded when you close that window or that container.
  7. Log out.
    Help prevent platforms from continuing to track you on other sites, especially those social media buttons, by logging out.
    Once, I found that a news site I was reading was sending the names of the articles I was reading back to Instagram, because I hadn’t logged out of Instagram.
  8. Disallow third party cookies in your browser preferences.
    This will probably break a lot of sites, but where you can, it will protect you.
  9. Use a tracker blocker.
    Far be it from me to recommend something I built myself, but we built it out of necessity, because there wasn’t anything else doing what we were doing. It doesn’t block everything, but it does block a lot of stuff.
    But we also recommend uBlock Origin for platforms that support it; because most blockers, ad blockers, they track you, too.
  10. Use DuckDuckGo for search, not Google.
    Google has trackers on 80% of the top websites in the world. So don’t give it the more intimate information that’s contained in your search history: the symptoms you’re worried about, the political issues you care about, the products you’re looking to buy. DuckDuckGo is a privacy-respecting search engine alternative.
  11. Don’t use Gmail.
    Your email not only contains all your communication, but the receipts for everything you’ve bought, the confirmations of any events you’ve chosen to attend, and every platform newsletter and service you’ve ever signed up for. There are plenty of privacy-respecting email providers out there; you’ve actually got quite a lot of choice. Two good options are Fastmail and ProtonMail.
  12. Don’t use WhatsApp.
    WhatsApp’s message contents may be encrypted, but they don’t encrypt who you’re talking to and when you’re talking to them.
    There are privacy respecting alternatives for messaging, including Wire.
  13. Don’t use Facebook for everything.
    I’m very aware that there is no good alternative to Facebook when your school is using it to communicate with parents, or when your friends are sharing baby photos; but you can find alternatives: for chat, Wire can replace Messenger, and instead of Facebook Pages, your own website gives you a lot more control over what you’re publishing.
  14. Seek alternatives to the software you use every day. Switching.software has a great list of resources, provided by people who really care about ethical technology. Of course, these are all choices we can make once we’re on the web, but we have to be aware of the other places we’re tracked, too.
    Third party services that websites use are not necessarily visible to us; neither is anything else that uses an internet connection, like the Internet of Things.
    The web hosts hosting websites can also track us, the browsers can also track us.
    Beware of any browser that wants you to sign in, or uses telemetry.
    Our operating systems can and do also track us. Even our internet service providers, our ISPs, have records of what we’re browsing online.
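Going back to tip 3 for a moment, the ‘plus addressing’ pattern can be illustrated with a tiny helper. The address and tag here are hypothetical, and support varies by email provider, so check yours before relying on it:

```python
# Derive a per-service "plus address" from a real address, so mail sent to it
# still arrives in your inbox but reveals who you gave the address to.

def plus_address(address: str, tag: str) -> str:
    local, domain = address.split("@", 1)
    return f"{local}+{tag}@{domain}"

print(plus_address("you@example.com", "newsletter"))  # you+newsletter@example.com
print(plus_address("you@example.com", "shop"))        # you+shop@example.com
```

If spam later arrives at the tagged address, you know which service shared or leaked it, and you can filter on the tag.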

Your choices affect your friends and family, and your friends and family’s choices affect you.

If you don’t use Gmail, that’s fine; but if all your friends and family do, Google still gets a lot of information.

You may not use Facebook, but Facebook still has shadow profiles based upon the contacts that other people have uploaded.

This all seems like a lot of work, right?

I know. I advocate for privacy. I don’t have the time to use all these resources, or do this all the time.

That’s why it’s unfair to blame the victim for having their privacy eroded. Not to mention that our entire concept of privacy is getting twisted by the same people who have an agenda to erode it.

One of the biggest culprits in trying to redefine privacy is Facebook.

This ad might be familiar to you; it’s been shown on TV recently. It shows a person undressing behind towels held up by her friends on the beach.

Facebook advert showing a person undressing behind towels held up by her friends on the beach

Alongside the Facebook post visibility options, which include public, friends, only me, close friends.

The ad explains all light-hearted,

“We all have different privacy preferences in life,”

and it ends saying,

“There are a lot of ways to control your privacy on Facebook.”

It doesn’t mention that friends, only me, and close friends, should really read friends and Facebook, Inc., only me and Facebook Inc., and close friends and Facebook Inc.; because you’re never really sharing something with only me on Facebook. Facebook Inc. has access to everything you share.

Privacy is the ability to choose what you want to share with others, and what you want to keep to yourself.

Facebook shouldn’t be trying to tell us otherwise. Google doesn’t believe in privacy, either.

Ten years ago, Eric Schmidt, the CEO of Google at the time, and later executive chairman of Alphabet, Google’s parent corporation, famously said,

“If you have something you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.”

To which I respond, “Okay, Eric. Tell me about your last trip to the toilet.”

Do we need to be smart about what we share publicly? Sure.

Don’t go posting photos of your credit card or your home address online. Maybe it’s unwise to share a photo of yourself blackout drunk when you have a job interview next week.

Perhaps we should take responsibility for the awful things we say to other people online; but we shouldn’t be worried about what we share online for privacy reasons.

We should be worried about it for social reasons. We shouldn’t be needing to take steps to protect ourselves from corporations and governments.

Right now, the corporations are more than happy to blame us for our loss of privacy.

They say we signed the terms and conditions. We should read the privacy policy. It’s our fault.

However, an editorial board op-ed in The New York Times – How Silicon Valley Puts the ‘Con’ in Consent – pointed out that the current state of privacy policies is just not fit for use.

The clicks that pass for consent are uninformed, non-negotiated, and offered in exchange for services that are often necessary for civic life.

This was the topic of an entire documentary from 2013 called ‘Terms and Conditions May Apply.’ It’s still good; it’s still relevant.

Two law professors analysed the sign-up terms and conditions of 500 popular US websites, including Google and Facebook, and found that more than 99% of them were unreadable, far exceeding the level most American adults read at, but still enforced.

It’s not informed consent when you can’t understand the terms.

How can we even truly consent if we don’t know how our information might be used against us?

It’s not true consent if it’s not a real choice. We’re not asked who should be allowed access to our information, and how much information they’re allowed, and how often and for how long, and when?

We’re just asked to give up everything or get nothing. That’s not a real choice. It’s certainly not a real choice when the cost of not consenting is to lose access to social, civil, and labour infrastructure.

There’s a recent paper by Jan Fernback and Gwen Shaffer, ‘Cell Phones, Security and Social Capital (PDF).’

The paper examines the privacy trade-offs that disproportionately affect mobile-mostly internet users.

I’ll get into that more later, but for now, I’ll just leave you with an excerpt that shows the cost of not giving consent.

“Some focus group participants reported that in order to maintain data privacy, they modify online activities in ways that harm personal relationships and force them to forgo job opportunities.”

The technology we use is an everyday thing; it forms that vital social, civil, and labour infrastructure. As largely helpless consumers, there’s not really that much we can do to protect ourselves without a lot of free time and money.

That’s why I wanted to get to the stuff we don’t often get to discuss in-depth at other events; where I think that folks like you might get this.

I wrestle with these kinds of things quite a lot. I hope you’ll bear with me if I don’t really have the right words for it in places, but that’s the reasoning behind the talk.

When I saw that the theme of the event was accessibility and ethics, I wanted to talk about accessible unethical technology.

So, when discussing inclusivity, we often discuss intersectionality.

Intersectionality is a theory introduced by Kimberlé Williams Crenshaw @sandylocks, where she examined how black women are often let down by discrimination narratives that focus either on black men as the victims of race-based discrimination, or white women as the victims of gender-based discrimination.

Those narratives don’t examine how black women often face discrimination compounded by both race and gender, and discrimination unique to black women, and discrimination compounded by other factors, such as class, sexual orientation, age, and disability.

Kimberlé Williams Crenshaw uses intersectionality to describe how these overlapping or intersecting social identities, particularly in the cases of marginalised and minority identities, relate to systems and structures of domination, oppression, or discrimination.

We see this in accessibility and inclusivity, when we understand that a disabled person frequently has more than one impairment that might impact their use of technology.

The impairments reported by disabled people in the UK in the year 2017 to 2018 (because apparently that’s a year) show that many disabled people in the UK have more than one impairment that affected them just in the last year.

Those impairments also intersect with a person’s race, their class, their wealth, their job, and so many other factors.

For example, a disabled person with affordable access to a useful assistive technology will likely have a very different experience from a disabled person who can’t afford the same access.

We rarely discuss the ethical considerations of a product, or a project, or a business; and how that intersects with inclusivity.

Like when the technology you use is a lifeline to access, you’re impacted more severely by the unethical factors.

Last year, Dr. Frances Ryan @drfrancesryan covered this in her article, ‘The Missing Link: Why Disabled People Can’t Afford to #DeleteFacebook.’

After the Cambridge Analytica scandal was uncovered, we had all these people encouraging each other to #DeleteFacebook.

Dr. Frances Ryan pointed out,

“I can’t help but wonder if only privileged people can afford to take a position of social media puritanism. For many, particularly people from marginalised groups, social media is a lifeline, a bridge to a new community, a root to employment, a way to tackle isolation.”

This is also echoed in the paper I mentioned earlier by Jan Fernback and Gwen Shaffer,

“First economically disadvantaged individuals, Hispanics and African Americans, are significantly more likely to rely on phones to access the internet compared to wealthier white Americans. Similarly, people of colour are heavier users of social media apps compared to white Americans. Second, mobile internet use, mobile apps and cell phones themselves, leak significantly more device-specific data compared to accessing websites on a computer. In light of these combined realities, we wanted to examine the kinds of online privacy trade-offs that disproportionately impact cell-mostly internet users; and by extension, economically disadvantaged Americans and people of colour.”

So, what they found speaks to incredible inequality.
Speaking about this paper, Gwen Shaffer explained,

“All individuals are vulnerable to security breaches and identity fraud, system errors, and hacking, but economically disadvantaged individuals who rely exclusively on their mobile phones to access the internet, are disproportionately exploited.

Unfortunately, members of disadvantaged populations are frequent targets of data profiling by retailers hoping to sell them cheap merchandise, or bait them into taking out subprime loans.

They may be charged higher insurance premiums, or find their job applications rejected.

Ultimately, the inequities they experience offline are compounded by online privacy challenges.”

But many participants, and I think a lot of us can really identify with this feeling, just felt resigned to trading their privacy for access to the internet.

So study participants largely seemed resigned to their status as having little power and minimal social capital.

I believe that these inequalities are likely to intersect with disabled people’s lives, particularly because disabled people already have less access to the internet.

If we read the UK’s Office for National Statistics Internet Use Report for 2019, the number of disabled adults using the internet has risen.

However, there’s still a whopping 17% difference between disabled adults and non-disabled adults.

In 2019, the proportion of recent internet users was lower for adults who were disabled, 78%, compared with those who were not disabled, 95%.

If we return briefly to that topic of privacy policies, I noted that the reading level required for the average privacy policy was higher than the education afforded most Americans.

In fact, the report suggested that 498 out of 500 of the privacy policies examined required more than 14 years of mainstream education to understand.

What they failed to note was how that might be significantly harder to access for those with difficulties reading.

This brings me to the intersection of privacy and access. It’s what I wondered about when I read the results of WebAIM’s 8th Screen Reader User Survey.

They had a question in there:

How comfortable would you be with allowing websites, and thus website owners, to detect whether you’re using a screen reader?

Now, the topic of screen reader detection comes up now and again in the web community.

The responses were worrying: 62.5% were very or somewhat comfortable with allowing screen reader detection.

Respondents with disabilities were significantly more likely to be in favour of detection than respondents without disabilities.

The report summarises,

“These responses clearly indicate that the majority of users are comfortable with revealing their usage of assistive technologies, especially if it results in a more accessible experience.”

This is what we were discussing earlier after Léonie’s talk.

The report itself points to how this could be questionable.

It points to Marco Zehe’s blog post on that very question – Why screen reader detection is a bad thing – though that was back in 2014.

He discusses the problems that could or are even likely to occur if a website could detect whether a visitor was using a screen reader.

This year, Apple briefly trialled assistive technology detection in Safari; that was a big thing, and it ended quite quickly.

Léonie herself laid out the case against screen reader detection very clearly back in 2014 – Thoughts on screen reader detection, and I paraphrase quite badly here:

“Not wanting to share personal information with the website she visits, not wanting to be segregated into text-only websites, not wanting design decisions to be based on the wrong criteria.

Wanting the focus to be on achieving screen reader accessibility through future friendly, inclusive, robust standards-based development; the good stuff.

Like the detection of any personal information, it’s an invasion of your privacy.

The decision about what you choose to share is taken away from you.

The consequences are then further decisions made on your behalf without your say.”

Often when I discuss sharing personal information with people, they only really think about the value of that one distinct data point, often a sensitive data point, and not the implications of sharing that data point with others or connecting it with other information.

In Marco’s blog post, he went into this in detail, discussing not just the single data point of him using a screen reader.

He said,

“For one, letting a website know you’re using a screen reader means running around the web waving a red flag that shouts, ‘Here! I’m visually impaired or blind,’ at anyone who is willing to look.

It would take away the one place where we as blind people can be relatively undetected without our white cane or guide dog screaming at everybody around us that we’re blind or visually impaired, and therefore giving others a chance to treat us like true equals.

Because let’s face it: the vast majority of non-disabled people are apprehensive in one way or another when encountering a person with disability.”

Now, Marco works in tech. He’s very familiar with these things. I wondered whether that affected his perspective, so I decided to ask my brother.

As you can see, it was a lazy focus group of one person; and he’s biased, because he’s my brother and I go on about privacy all the time.

He has some knowledge of tech, and he says that me going on about it has affected his use of the web.

He has cerebral palsy, and learning difficulties. He uses a screen reader occasionally, but predominantly relies on dictation software, speech recognition software; around 90% of the time.

As a person with a neurological condition that is very visible in his physicality, he gets Marco’s perspective.

He’s frustrated by how people treat him when they know he’s disabled. Still, he has the same feelings as many of the screen reader users asked in the survey.

He said to me,

“I don’t mind if the platforms know I’m disabled if they provide me with better access; though I’d be bothered if they made it obvious to other users of the platform.”

The thing is, Sam and the 62.5% of screen reader users in that survey should be allowed to make that choice.

As long as it’s a real choice. As long as we know who is allowed access to our information, and how much, and how often, and for how long.

Like so many issues with technology, what we’re dealing with are underlying social and systemic issues.

As technologists, we can’t help ourselves: we try to fix and smooth over any kind of problem using technology. But technology can’t fix issues of domination, or oppression, or discrimination. It can, however, make those issues worse.

We can and we do amplify and speed up systemic issues with technology.

Mike Ananny made this point recently in an article about tech platforms:

“We still seem to operate with the notion that online life is somehow detached from our everyday existence; that it’s a different life.

And tech platforms often take advantage of that notion by suggesting that if we don’t like technology, we can just log out, or log off, or go be mindful of some other stuff like that.

The thing is, people with that kind of mindset really show how shallow they are by saying, ‘If you don’t like technology, you don’t have to use it.'”

But there’s a reason we can’t escape technology.

As Mike said, platforms are societies of intertwined people and machines.

There’s no such thing as online life versus real life. We give massive ground if we pretend that these companies are simply having an effect or an impact on some separate society.

Which brings me to another issue rife in technology today.

Given that I myself am a non-disabled person currently talking about accessibility and inclusivity, I think it’s worth me also mentioning technology colonialism. I mean, I’m also an English person…

Anjuan Simmons wrote a long and detailed article about technology colonialism five years ago on Model View Culture. He first explained what colonialism is.

“European colonialism was spurred by an interest in trade with foreign nations.

This involved the exchange of materials and products between the colonial powers and foreign nations.

Colonial powers always saw themselves as superior to the native people, whose culture was rarely recognized or respected.

The colonisers saw economic value in these foreign relations, but it was always viewed as a transaction based on inequality.”

Then, he compared it to what we so often do in technology:

“Technology companies continue with this same philosophy in how they present their own products.

These products are almost always designed by white men for a global audience with little understanding of the diverse interests of end users, particularly if we talk about the people at the top.”

So, can you tell why I think that technology colonialism is very relevant to inclusivity?

We don’t speak to users.

Instead, we use analytics to design interfaces for people we’ll never try to speak to, or ask whether they even wanted our tech in the first place.

We’ll assume we know best because we are the experts, and they are just users.

We don’t have diverse teams. We barely even try to involve people with backgrounds that are different from our own.

How often do we try to design accessible interfaces without actually involving anyone who uses assistive technology?

Obviously I’m speaking generally. I don’t necessarily assume that’s all of you in here, but of the industry as a whole.

It’s the reason why disabled activists and anti-racism activists both say, “Nothing about us without us,” because assuming you know what’s best for people with different needs from your own usually results in something that’s completely wrong and very patronising.

With all of this talk of being colonial, I think it’s important for me to acknowledge my own position as a non-disabled person who advocates for accessibility and inclusivity; because I believe that we have to hold our communities to account without centring ourselves.

That’s funny, that’s what Matt was talking about before.

I don’t know what’s best for anyone, but when I learn what’s harmful, I’m going to pay attention and try to share what I learn.

Accessibility isn’t charity or kindness. It’s a responsibility. We not only have a responsibility to design more inclusive and accessible technology, but to consider the impact our design has outside of its immediate interface.

Making our technology inclusive and accessible is not enough if the driving force behind that technology is unethical. We shouldn’t be grateful for the accessibility of unethical products.

Accessibility is only inclusive if it respects all of the rights of a person.

As the people advocating for change, we can’t exactly go around telling people to stop using this technology unless there are real ethical alternatives.

That’s where you and me come in. As people who work in technology and who create technology, we have far more power for change. We can encourage more ethical practice. We can build alternatives.

So how do we build ethical technology?

Well, as an antidote to big tech, we need to build small technology.

Everyday tools for everyday people, designed to increase human welfare, not corporate profits.

Sure, lofty goal; but there are practical ways to approach it. Let’s make a start with some best practices.

Make it easy to use. Plenty of privacy respecting tools exist for nerds to protect themselves.

I’m a bit of a nerd. I use some of them. But we mustn’t make protecting ourselves a privilege only available to those who have the knowledge, and the time, and the money.

It’s why we must make easy-to-use technology that is functional (and that includes being accessible), convenient, and reliable.

We need to make it inclusive.

We must ensure that people have equal rights and access to the tools that we build and the communities who build them, with a particular focus on including people from traditionally marginalized groups.

Free and open technology, a lot of those nerd tools, is terrible at this.

They don’t build accessible technology, and they often surround themselves with completely toxic communities.

Don’t be colonial. Our teams must reflect the intended audience of our technology.

If we can’t build teams like this (some of us work in small teams, or as individuals), we must ensure that people with different needs can take what we make, specialise it, and make it work for their own needs.

We can build upon best practices and shared experiences of others, but we shouldn’t be making assumptions about what is suitable for an audience we’re not a part of.

We need to make it personal. We’ve got to stop our infatuation with growth and greed, focusing on building personal technology for everyday people, not just spending all our focus and our experience and our money on tools for start-ups and enterprises.

Next up, the architecture of the technology itself.

Make it private by default.

It bears repeating that privacy is the ability to choose what you want to share with others, and what you want to keep to yourself.

Make your technology functional without personal information.

Allow people to share their information for relevant functionality, only with their explicit consent.

When obtaining consent, tell the person what you’re going to do with their information, who will have access to it, and how long you’ll keep that information stored.
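
To make that concrete, here is a minimal sketch (my own illustration, with hypothetical field names; not a GDPR compliance recipe) of a consent record that keeps the purpose, the recipients, and the retention period alongside the grant itself:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class ConsentRecord:
    """One explicit grant of consent: what data, why, who sees it, for how long."""
    data_category: str    # e.g. "email address"
    purpose: str          # what you're going to do with the information
    recipients: list      # who will have access to it
    granted_at: datetime  # when consent was given
    retention: timedelta  # how long you'll keep the information stored

    def expires_at(self) -> datetime:
        """The moment the stored information should be deleted."""
        return self.granted_at + self.retention

# Hypothetical example: a newsletter signup.
consent = ConsentRecord(
    data_category="email address",
    purpose="sending the newsletter this person signed up for",
    recipients=["our own mail server"],
    granted_at=datetime(2019, 10, 25, tzinfo=timezone.utc),
    retention=timedelta(days=365),
)
print(consent.expires_at().date())  # → 2020-10-24
```

Keeping the record next to the data also gives you something honest to show a person when they ask what you hold, and a deadline for deleting it.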

This has actually recently become established as a requirement under the GDPR, the EU’s General Data Protection Regulation.

Write easy to understand privacy policies.

Don’t just copy and paste them from another site. The person you’re copying from probably did that as well.

Ensure the privacy policy is up to date with every update of your technology.

Don’t use third party consent frameworks. Those stupid pop-up boxes with the toggles that ask you,

‘Do you consent to this, that, and the other?’

Most of them aren’t GDPR compliant. They’re awful experiences for your visitors, and they’re likely just to get you into legal trouble.

Don’t use third party services if you can avoid them.

They present a risk to both you and your users. If you do use third party services, make it your responsibility to know their privacy policies, and what information they’re collecting, and what they’re doing with that information.

If you do use third party scripts, and content delivery networks, and videos, and images, and fonts, self-host them wherever possible. Have them on your own site.

Ask the providers if it’s unclear whether they provide an option for self-hosting.

It’s probably worth mentioning a little bit of social media etiquette.

If you know how, when sharing a URL, strip the tracking identifiers and the Google AMP junk out of them. Friends don’t let corporations track their friends.
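
As an illustration of what that stripping can look like (this sketch is mine, not from the talk, and the parameter list is a non-exhaustive assumption), common tracking identifiers such as `utm_*`, `fbclid`, and `gclid` can be removed from a URL’s query string:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Common tracking parameters; extend this set as you encounter more.
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign",
                   "utm_term", "utm_content", "fbclid", "gclid"}

def strip_tracking(url: str) -> str:
    """Return the URL with known tracking query parameters removed."""
    parts = urlsplit(url)
    clean_query = [(key, value)
                   for key, value in parse_qsl(parts.query, keep_blank_values=True)
                   if key not in TRACKING_PARAMS]
    return urlunsplit(parts._replace(query=urlencode(clean_query)))

print(strip_tracking("https://example.com/post?id=7&utm_source=twitter&fbclid=abc"))
# → https://example.com/post?id=7
```

Browser extensions do something similar automatically, but even trimming these by hand before sharing a link keeps your friends out of the tracking graph.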

If you feel you need a presence on social media or on blogging platforms like Medium, don’t make it the only option for someone to read what you’re saying.

Post to your own site first, and then mirror those posts onto those third party platforms.

Make it zero knowledge. Zero knowledge tools have no knowledge of your information.

The technology might store a person’s information, but the people who make or host the tools cannot access that information, even if they wanted to.

Keep a person’s information on their device where possible.

If a person’s information needs to be synced to another device, ensure that information is end-to-end encrypted, with only that person having access to decrypt it.

Make it peer to peer.

Peer to peer systems usually enable people to connect directly to one another without a person, or a corporation, or a government in the middle.

Often this means communicating device to device without a server in the middle.

Make it interoperable. Interoperable systems can talk to one another using established protocols like web standards. I mean, standards don’t always mean best practice, but I think that’s a discussion for another time.

Make it easy for a person to export their information from your technology into another platform.

This is also required by GDPR.
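
A sketch of what such an export might look like (my illustration; the record fields are hypothetical): serialise everything you hold about a person into a structured, machine-readable format like JSON that another platform can import:

```python
import json

def export_person_data(person: dict) -> str:
    """Serialise everything held about a person as portable JSON.

    GDPR's data-portability right asks for a structured, commonly used,
    machine-readable format; JSON is one widely supported choice.
    """
    return json.dumps(person, indent=2, ensure_ascii=False, sort_keys=True)

# Hypothetical record of what a small blogging tool might hold:
record = {
    "name": "Sam",
    "posts": [{"title": "Hello", "body": "My first post"}],
}
export = export_person_data(record)
print(export)

# A round trip shows the export is lossless and importable elsewhere:
assert json.loads(export) == record
```

The important design choice is that the export covers *everything* you hold, not just the flattering parts, and that it uses a format the person can take to a competitor.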

Make it share alike.

We also have to take care with how we share the technology we build, and how we sustain its existence.

We need to cultivate a healthy commons by using licences that allow other people to build upon and contribute back to our work, but don’t allow big tech to come along, make use of it, and close it off.

Make it non-commercial. Build stay-ups, not start-ups.

My partner at Small Technology Foundation, Aral Balkan, coined the term ‘stay-up’ for the anti-start-up.

We don’t need more tech companies aiming to fail fast or be sold as quickly as possible. We need long term sustainable technology. We need to build not for profit technology, public services.

If we’re building sustainable technology for everyday people, we need a compatible funding model, not venture capital, not equity-based investment.

So, you’re asking, “How can I do any of this?” It may feel difficult or even impossible, especially building it within your current organisation or with your current employer.

It probably is, but there are steps we can take to give ourselves the opportunity to build more ethical technology.

  1. If you can’t do it at work, do it at home. If you have the time, make a personal website, practice small technology on your own projects.
  2. Use small technology as a criterion when you’re looking for your next job. You don’t have to be at your current job forever. Nobody wants to stay working at a place where they feel like they have no agency.
  3. Developing accessibility best practices is always a good thing. If you’re currently making unethical technology accessible, that’s fine. At least you can use those skills to make accessible ethical technology in the future.

So, who builds small technology? Well according to many people I’ve spoken to, it’s luddites, rebels, and people who wear tin foil hats.

These are all comments I’ve heard from people that try to demean what I’m talking about.

I’ve been speaking about these kinds of things for about seven years now.

I’ve been heckled by a Google employee. I’ve been called a tin foil hat wearing ranter by someone from Facebook.

I’ve had so many people tell me that this is the only way to build technology, and that I’m just trying to impede the natural progress of technology.

As Rose Eveleth wrote in a recent article on Vox,

“The assertion that technology companies can’t possibly be shaped or restrained with the public’s interest in mind is to argue that they are fundamentally different from any other industry. They’re not.”

We can’t keep making poor excuses for bad practices. We must consider who we’re implicitly endorsing when we recommend their work and their products.

I’m sorry, I don’t give a jot about all the cool shit coming out of unethical companies.

You’re not examples to be held above others. Your work is hurting our world, not contributing to it.

It’s a whole approach that matters. It’s not just about how we build technology, but our approach to being part of communities that build technology.

So, you might be thinking, “Well, I’m just one tiny person,” but we’re an industry, we’re organisations, we’re communities, we’re groups made up of many people.

If more of us made an effort, we could have an impact. We have to remember that we are more than just the organization that we work for.

If you work in a big corporation that does unethical things, you probably didn’t make the decision to do that thing yourself; but I think the time has come that we can’t unquestioningly defend our employers.

We need to use our social capital. We need to be the change we want to exist, but how?

  1. Be independent. We’ve got to be comfortable being different. We can’t just follow other people’s leads when those people aren’t being good leaders. Don’t look to heroes who can let you down. Don’t be loyal to big corporations who don’t care who you are.
  2. Be the advisor. Do the research on inclusive ethical technology. Make recommendations to other people. Make it harder for them to make excuses. I mean I think just being here today, you’re doing this.
  3. Be the advocate. Marginalized people shouldn’t have to risk themselves to make change. Advocate for others. Advocate for people who are not represented.
  4. Be the questioner. Question the defaults that we have. Ask, “Why was it chosen to be built that way in the first place?” Try asking a start-up how it makes its money.
  5. Be the gatekeeper. When the advocacy isn’t getting you far enough, use your expertise to prevent unethical things from happening on your watch. You don’t have to deploy that website.
  6. Be difficult. Be the person who is always known for bringing up the issue. Embrace the awkwardness that comes with that power. Call out questionable behaviour.
  7. Be unprofessional. Don’t let anybody tell you that standing up for the needs of yourself or the needs of others is unprofessional. Don’t let people tell you to be quiet, or that you’ll get things done if you’re a bit nicer, or if you smile a bit more, or if you’re a man.
  8. Be the supporter. If you’re not comfortable speaking up for yourself, at least be there for those who do. Remember that silence is complicity. Because speaking up is risky. We’re often having to fight entities that are far bigger than ourselves. We have our lives and the way that we make money that are at risk, but letting technology continue in this way is way riskier.

Someone came up to me after the talk I gave a couple of months ago and referred to me as –

“The woman who comes and tells people to eat their vegetables.”

The thing is, I’m not your mother. I’m just here because I want to tell you that we deserve better.

You can find my slides at Notist.

Thank you.