Virtue ethics versus algocracy: round 1

I was cooking while listening to some not-so-random music with my partner on her phone when, before the next song, an advertisement came on. Our hands were wet, so we had to wait and hear about the outstanding work done by a charity helping poor children. Of course, one should never use advertising to evaluate the impact of a charity. What I could evaluate was that the ad-filtering algorithm agrees with me about what a nice person my partner is. The occasion motivated the following reflection.

Still at the anecdotal stage, I remembered how one of my friends used her social media to complain about a misguided advertisement she had received. She is a woman in her thirties who started receiving ads for marriage-related products. She was outraged! However, if we stop to consider, these cases are not the most threatening. After all, they are not only easy to perceive but also, and more importantly, easy to resist. The real threat occurs when the algorithm is right, so right that we are unable to make a sharp distinction between the recommendation and our choice. This should be more likely to outrage us because it might constitute a violation of autonomy. In the most serious cases, recommendation algorithms may take advantage of knowing our weaknesses and use that knowledge to nudge us into self-detrimental behaviour. Welcome to the algocracy era.

Algocracy, or rule by algorithms, is a term used to refer to a dystopian, or maybe utopian, but certainly not far-fetched scenario in which algorithms would play the leading role in the most important instances of human decision-making. The discussion tends to revolve around decisions related to government. As the examples above suggest, my use here will be much more restricted. I want to talk about our individual daily choices. In these cases, computer-coded algorithms already have a relevant role in guiding our decisions, for better or worse. Governments and enterprises appear only at the other end of the line; that is, they are the ones implementing the algorithms that collect our data and may guide our choices.

Given this delimitation, the relevant types of algorithms will be something along the lines of:

Content filtering: when an algorithm takes the data of your behaviour and filters what it will show you based on that. My partner clicked on a lot of charity-related content and, thus, these ads started appearing to her.
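A minimal sketch of that idea, with an invented ad catalogue and a hypothetical topic profile inferred from the user's own clicks; ads are simply ranked by how much they overlap with what the user has already engaged with:

```python
# Hypothetical ads, each tagged with topics (not a real ad platform's data model).
ads = {
    "children_charity": {"charity", "children"},
    "luxury_watch": {"luxury", "fashion"},
    "food_bank_appeal": {"charity", "food"},
}

# Topics inferred from this user's own click history.
clicked_topics = {"charity", "children"}

def score(ad_topics, profile):
    # Jaccard overlap between the ad's topics and the user's click profile.
    return len(ad_topics & profile) / len(ad_topics | profile)

ranked = sorted(ads, key=lambda a: score(ads[a], clicked_topics), reverse=True)
print(ranked)  # charity ads come first for this profile
```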

Collaborative filtering: when an algorithm gets data from your behaviour and from that of your friends and similar people, and chooses what it will show you based on that wide range of cross-referenced information. My partner's clicks, and those of her circle of friends, would have matched the profile that a charity would like to target.
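Here is a minimal sketch of user-based collaborative filtering, with a made-up click matrix (rows are users, columns are ads, 1 means clicked); the target user's predicted interests are borrowed from the users whose histories resemble theirs:

```python
import numpy as np

clicks = np.array([
    [1, 1, 0, 0],   # target user
    [1, 1, 1, 0],   # a user with a very similar history
    [0, 0, 0, 1],   # a dissimilar user
])

def cosine(u, v):
    # Cosine similarity between two click histories.
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

target = clicks[0]
weights = np.array([cosine(target, other) for other in clicks[1:]])

# Predicted interest in each ad = similarity-weighted average of the others' clicks.
predicted = weights @ clicks[1:] / (weights.sum() + 1e-9)

# Recommend the unseen ad with the highest predicted interest.
unseen = np.where(target == 0)[0]
print(unseen[np.argmax(predicted[unseen])])  # ad 2, clicked by the similar user
```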

The transition between these two cases offers a simple example of a trend in machine learning. The more sophisticated the algorithms become, the less we know about the methods they are using. And the ‘we’ includes the people who write the algorithms’ code. All we know is that they are working more and more effectively. To use a last, and more dramatic, example, we can think of k-means clustering. In this technique, the machine learning program receives raw data and divides it into clusters that need make no sense to human beings, and that division allows it to make predictions with a higher success rate than any human analyst. K-means clustering has applications ranging from customer segmentation for marketing campaigns to predicting and preventing cybercrime.
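A toy version of the customer-segmentation use case, assuming scikit-learn is available and using made-up purchase features; note that the clusters k-means returns need not correspond to any category a human analyst would have named in advance:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical features per customer: [visits per month, average basket value].
customers = np.vstack([
    rng.normal([2, 15], [1, 5], size=(50, 2)),    # occasional visits, small baskets
    rng.normal([12, 80], [3, 20], size=(50, 2)),  # frequent visits, large baskets
])

# Partition the customers into two clusters purely from the raw numbers.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
print(model.labels_[:5])
print(model.cluster_centers_)
# Downstream systems can target each cluster differently without anyone
# having to say what the clusters "mean".
```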

The opacity of these algorithms’ operation is already great enough to hinder human intervention at the procedural level. Thinkers like Morozov claim that this opacity threatens human autonomy and that we should therefore do what we can to stop them while we still can. On the other hand, there are (more instrumentally minded) thinkers who hold that, if the results are good, it would be better to have algorithms interfering in our decisions. Some of them are prudent. Agar, for instance, defends a modest accommodation in which we should use technology to enhance our human capacities while making sure that we keep the improvements within the human range. In our more restricted context, the problem can be put in terms of the risk of losing the freedom to choose versus gaining the tools to optimize our choices beyond what we would manage under normal conditions.

Economists traditionally model freedom of choice as being able to make the right choice. The view leads to a curious implication. Take two sets of choices, A: {r, s, t, u, v} and B: {r}, in which r is the right choice. According to this economic model, both sets offer the same degree of freedom of choice. Now, on this model, if algorithms interfere in our decision-making to help us optimize our choices, there is no reduction of freedom. However, this definition of freedom of choice is far from intuitive (some economists have worked on alternatives that accommodate the intuition that having more options increases freedom of choice).

Moreover, in our context, there is a lot of potential for conflicts of interest in determining which is the right choice. For instance, the enterprises that put the algorithms to work benefit if they make the other end of the relation, the customers being fed by the algorithms, choose to increase their consumption. Of course, it is not a case of simple conflict. The enterprises may maximize their profit while also maximizing the well-being of their clients. However, since profit is the ultimate goal, there will always be a tendency to maximize profit over well-being. Thus, there is at least a standing incentive for them to exploit our consumer weaknesses.

Some people think we should take autonomy as a value in itself. Such a view can lead to a radically opposite conception of freedom of choice, in which freedom is the right to make the wrong choice, even when it is detrimental to the person who chooses. Some projects of rebellion against algocracy adopt guerrilla or Luddite-like strategies which seem to go in this direction. For instance, clicking on random links and ads would add noise to the data the algorithms are mining. I do not know whether this sort of rebellion is effective; outliers are easily discounted by statistical models. However, even if it were effective, much like earning the right to make the wrong choice, this approach does not seem appealing.
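To see why scattered noise is cheap to neutralize, here is a minimal sketch with an invented click log and a purely hypothetical thresholding rule: topics clicked only rarely are dropped from the profile, so a handful of deliberate random clicks never registers.

```python
from collections import Counter

# Invented click log: genuine interests plus a few deliberate "noise" clicks.
genuine = ["charity"] * 40 + ["cooking"] * 25
noise = ["crypto", "golf", "yachts", "tractors", "opera"]  # one click each
clicks = genuine + noise

counts = Counter(clicks)
threshold = 0.05 * len(clicks)  # keep only topics above 5% of all clicks
profile = {topic for topic, n in counts.items() if n >= threshold}
print(profile)  # {'charity', 'cooking'} — the noise never makes it in
```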

To find a plausible alternative, it is necessary to start with a reasonable conception of freedom of choice. I would say that such a conception has two desiderata: we want to have choices, but we also want an objectively reliable and yet subjectively relatable criterion for what counts as a right choice. To satisfy these conditions, I believe that a distinction between an agent’s first- and second-order desires might put us on the right track.

First-order desires are our immediate, unreflective desires, including those that motivate our consumer weaknesses. Second-order desires encompass the desires that we, as self-conscious agents, think we should have. We feel the conflict between these levels when we are watching stand-up comedy while thinking, in the back of our minds, that we should be reading Tolstoy. Frankfurt, who popularized the distinction, claims that our self is constituted by our second-order desires. People who constantly fall for their immediate desires would be hostages to their first-order desires. Here we do not need such a strong view. For us, second-order desires will only provide an objective and yet relatable normative criterion to ground our autonomy. A right choice will be thought of as the choice the agent would wish they had made if they were the ideal version of themselves under ideal circumstances.

Algorithms do not distinguish between first- and second-order desires, of course (or maybe I should say probably, since how they operate is opaque to us). At first, it seems that they can take advantage of both. Thus, they can sell you an unhealthy treat in a moment of exhaustion, and also, in a moment of grandiosity, a hyper-intellectual book that you wish you would read but are unlikely to. However, since the algorithm gathers information from what impels you, your friends, and people somehow similar to you to act in immediate circumstances, chances are it will profile your behaviour based more on your first-order than on your second-order desires.

It is also true that an algorithm may use information from a third party, say the science of healthy living, and offer you what is best for humans. These offers seem to be in your best interest, even if they do not match your second-order desires. Finally, we can envisage an algorithm that allows you to feed it some self-conscious input. Say you want to become a vegetarian and are able to tell that to the recommendation algorithm. By filtering its suggestions, the algorithm would help you become what you want to be. This is the case with the quantified tracking of smartwatches helping one’s training.
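As a rough illustration of this kind of goal-constrained filtering, here is a sketch with invented items, tags, and scores; the declared goal is supplied by the user up front and simply vetoes conflicting suggestions before ranking:

```python
# Hypothetical recommendations with made-up relevance scores.
recommendations = [
    {"item": "bacon burger deal", "tags": {"meat"}, "score": 0.9},
    {"item": "lentil curry kit", "tags": {"vegetarian"}, "score": 0.7},
    {"item": "running shoes", "tags": set(), "score": 0.6},
]

# The user's self-conscious input: a goal stated explicitly, not inferred from clicks.
declared_goal = {"avoid": "meat"}

# Remove anything that conflicts with the declared goal, then rank the rest.
filtered = [r for r in recommendations if declared_goal["avoid"] not in r["tags"]]
for r in sorted(filtered, key=lambda r: r["score"], reverse=True):
    print(r["item"])  # the meat option is never shown, whatever its score
```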

Given these scenarios, here is the dilemma:

If you make wrong choices on purpose to exercise your freedom to be wrong, the algorithm may fail to exploit your first-order desires, but neither will it be helpful. If you are honest and act naturally, the algorithm will have all the information it needs to make you do what it wants. That may be good for you in several cases, but it will certainly be better for the enterprises. The passivity of choice that follows will definitely threaten your autonomy, even in the scenario of an algorithm helping you implement your second-order desires. What is the best way to deal with this situation?

Virtue ethics is an agent-based approach to ethics. In opposition to consequentialism and deontology, which are action-based, virtue ethics does not ask which actions are right or wrong given an objective criterion, but rather how to form an agent who will perform the right actions. To do so, it adopts a performative principle: people become good by doing good actions. To find out what a good action is, it takes a model-based approach: the good action is what the virtuous agent would do in that situation. We can adapt this approach in light of our second-order-desires view of autonomy. The good action is what the agent would do if they were the ideal version of themselves. Thus, it is by acting as you want to be that you become what you want to be.

Now we can see why becoming a vegetarian by relying on the algorithm falls short of actualizing your second-order desires. On a virtue ethics approach, you would not become a vegetarian by following the filtered recommendations; after all, the acts of refraining would not qualify as your actions. You would easily eat meat again if the algorithm (or another agent) gave you the opportunity.

The tactic of resistance that I want to propose follows the conception of autonomy based on second-order desires, coupled with a virtue ethics approach. As such, resisting is as simple, and as much hard work, as living a well-lived life. It only requires you to make the right choices and actions in spite of the algorithm, which is hard without its interference and becomes harder with it. But being hard is not an objection; after all, this is almost a condition of the notion of resistance. In practical terms, you should start clicking on links and looking for the content that your reflective self thinks you should. If your virtual self acts in tune with your second-order desires, algorithms will always have limited power over your decisions. If they try to exploit your first-order desires, the algorithms will fail, and, more importantly, they will have to learn from the failure. If algorithms try to take advantage of your second-order desires, then you will have found an ally in fulfilling them.

This type of resistance means not only that you will become immune to the algorithms’ attempts to seduce you through your first-order desires, but also that you will feed the algorithms data from which they will build more virtuous models of human beings. Their predictions and recommendations would become more in tune with what we think we should do rather than with what we usually and disappointingly do. There is even a social effect, since your actions will affect the recommendations the algorithms make to your friends and to people similar to you. In the most (and maybe overly) optimistic scenario, it could lead to a virtuous cycle of better algorithms and better versions of ourselves.
