EPISTEMIC ARGUMENTS
IN SUPPORT OF PARTISAN LARGE LANGUAGE MODELS
Master's Dissertation by
Bianca M. Rasmussen
My Master's dissertation, submitted for the MSc in Philosophy and Public Policy at the London School of Economics and Political Science.
My goal was to consider what possible epistemic arguments there might be in favour of partisan versions of large language models such as ChatGPT.
Submitted for evaluation in August 2023. Supervised by Tena Thau. The dissertation received the mark of Distinction.
Abstract
Recent research shows that the large language models (LLMs) ChatGPT and Google Bard display left-leaning political bias. This is happening at a time when there is also tension about misinformation, political bias, and free speech on our media platforms and in society generally. While LLMs such as ChatGPT and Google Bard are not exactly like media platforms, they share similar elements. This raises the question of whether it would be plausible to see intentionally partisan LLMs in the future and to what extent that would be desirable.
This dissertation presents what I consider to be the strongest epistemic arguments in support of partisan large language models. First, I argue that there is an epistemic problem in only promoting one set of values through LLMs, when we live in a world full of competing conceptions of value. Further, I argue that we might avoid the harm of epistemic paternalism by allowing multiple partisan large language models. Finally, I present two arguments in favour of partisan LLMs being an epistemic good to both individuals and society. I consider some relevant objections and conclude that partisan LLMs can be supported by compelling epistemic arguments.
Table of Contents
Introduction
Section 1: The problem of competing values
1.1: Value alignment
1.2: Disagreement as evidence
Section 2: Avoiding epistemic paternalism
2.1: Problem of epistemic paternalism
2.2: Objection: Accepted paternalism elsewhere
2.3: Case: Diverse range of media
Section 3: Epistemic benefits
3.1: The individual
3.2: Society
Conclusion
Bibliography
Introduction
The latest developments in generative artificial intelligence (AI) include interactive chatbot technology which uses large language models (LLMs) to generate human-sounding text. The release of OpenAI’s ChatGPT in November 2022, in particular, made big waves, garnering more than 100 million active users within its first two months (Hu, 2023). ChatGPT is a chatbot which has been trained to process natural language and respond to given prompts. This includes the ability to engage in dialogue, summarize texts, produce creative writing such as poetry, and explain complex subjects (Eke, 2023). Recent academic work on ChatGPT and other LLMs shows that these chatbots display political bias in their output (Hartmann et al., 2023; McGee, 2023; Deepak P, 2023). By political bias, I mean a disproportionate inclination towards one political opinion or another. For example, when asked to write a poem of praise about former US president Donald Trump, ChatGPT declined, stating that it would not create “partisan, biased or political content” (Johnson, 2023). When asked to do the same for current US president Joe Biden, ChatGPT complied. At the time of writing, research overall shows a clear left-leaning bias in many of ChatGPT’s responses, particularly when the LLM is tested with various political quizzes (Baum and Villasenor, 2023; Rozado, 2023a).
This is happening at a time when there is also tension about misinformation, political bias, and free speech on our social media platforms, in established media, and in society generally. The belief that public figures were peddling a specific political agenda, combined with a deepening mistrust of science, is arguably connected to the low uptake of the COVID-19 vaccine among certain groups (University of Birmingham, 2022; Viswanath et al., 2021). Elon Musk’s self-proclaimed concern for free speech and his belief that “cancel culture” is making it impossible for people to express themselves freely led to his recent $44 billion purchase of the social media platform Twitter (Dang and Roumeliotis, 2023). Prior to Musk’s acquisition of Twitter, there was a steady rise of alternative social media platforms marketed as havens for free speech and antidotes to the “cancel culture” of established social media sites (Stocking et al., 2022). These include Truth Social, Parler, and Rumble. While LLMs such as ChatGPT and Google Bard are not exactly like social media platforms, they share similar elements, such as mainly text-based interaction and the ability to personalize the experience, and the sheer power of the technology amplifies the significance of these similarities. Concerns about ChatGPT and other LLMs being politically biased should therefore be taken seriously, as their development could end up following the trend of partisan media platforms. This raises the question of whether it would be plausible to see intentionally partisan LLMs in the future and to what extent that would be desirable.
Given that ChatGPT has already seen examples of output adjustment through jailbreaking, including the non-politically-correct alter ego DAN (Do Anything Now), I would argue that such a future is very possible (King, 2023). In fact, the academic David Rozado created RightWingGPT earlier in 2023 to demonstrate how easy it would be for anyone to produce a partisan LLM similar to ChatGPT (Rozado, 2023b; Knight, 2023). When considering partisan large language models, I am therefore imagining chatbots similar to ChatGPT, but with explicitly different political leanings. I assume that there would be multiple such models and that users would be able to choose which one they would like to interact with. Rozado’s creation comes with a warning about how dangerous partisan LLMs could be. However, I would like to consider whether there are any benefits to such a scenario. As mentioned, LLMs are an incredibly powerful tool, and like all such tools they can be used in ways which might benefit society or in ways which might do harm. One could imagine that access to more diverse LLMs would offer increased opportunity for the development of knowledge and inquiry. My underlying question is therefore: Can partisan large language models be supported by compelling epistemic arguments? I think it is essential to consider whether a future with a variety of partisan LLMs could be valuable before dismissing the possibility outright. I believe a philosophical discussion of this topic will be of great benefit to policy makers who are considering how to regulate this new technology.
This dissertation presents what I consider to be the strongest epistemic arguments which could support the development of partisan large language models. Epistemic arguments focus on the promotion of knowledge and how to establish claims and beliefs. Moral arguments take into account considerations such as our values or ideas of right and wrong. This dissertation focuses only on epistemic considerations of partisan large language models. I thus make a distinction between epistemic arguments and moral or practical arguments. While I argue that there are compelling epistemic arguments in support of partisan large language models, this is not to say that there are no good moral or practical arguments which might weigh in on policy making. Rather, I am simply narrowing the scope to bring the epistemic considerations to the forefront.
The structure of the dissertation will proceed as follows: In Section 1, I begin by considering the problem of competing values when trying to determine what should guide the creation of large language models. I argue that there is an epistemic problem in only promoting one set of values through large language models, when we live in a world full of competing conceptions of value. Having more values represented could contribute to the promotion of knowledge and help establish claims and beliefs. I consider this an epistemic argument in favour of multiple partisan LLMs. I also consider the disagreement between leading experts in the field as possible evidence in itself that the current guardrails in large language models might not be the correct ones. In Section 2, I argue that we might avoid the harm of epistemic paternalism by allowing multiple partisan large language models. I consider to what extent we allow epistemic paternalism in other areas and argue that there are epistemic costs related to stifling inquiry and debate. In Section 3, I present two arguments in favour of partisan LLMs being an epistemic good to both individuals and society. I will consider some relevant objections to my arguments. Finally, I conclude that partisan LLMs can be supported by compelling epistemic arguments but discuss whether these alone are enough to justify partisan large language models when making policy.
Section 1: The problem of competing values
It’s been a big year for Elon Musk. Not only has he rebranded the recently acquired social media platform Twitter, only to see a dip in the platform’s brand value; he has also announced his intention to launch ‘TruthGPT’, an alternative large language model to ChatGPT. In an interview with former Fox News host Tucker Carlson, Musk describes his dream LLM as “a maximum truth-seeking AI that tries to understand the nature of the universe” (Roth, 2023; Sellman, 2023). Similar to his motivations for acquiring Twitter, his idea for TruthGPT is grounded in a self-proclaimed concern for free speech and about who controls access to knowledge. Presumably this is done with the belief that he knows better than the “left-wing programmers” what counts as truth (Fox and Friends, 2023).
I can imagine that most readers would not be excited about a future where Elon Musk is the one in control of what counts as “the truth”. It might seem almost too obvious that he alone should not get to decide such matters. But his reasoning behind wanting a ‘TruthGPT’ and our reaction to his announcement might reflect some important epistemic considerations regarding existing large language models. After all, the creators of ChatGPT and Google Bard are fallible humans like Musk, who have decided on certain parameters on which to build the LLMs we use now. When ChatGPT responds positively to prompts to write Irish limericks about liberal politicians, and negatively to prompts about conservative politicians, this has at least partly to do with the values which have been encoded into the model (McGee, 2023). There are human choices involved in creating LLMs, and with these choices certain values are advocated. For example, OpenAI has guardrails in place that modulate the behaviour of ChatGPT, to prevent toxic model behaviour or malicious use such as the generation of hateful or harassing content (Ferrara, 2023, p. 3). The scope of “hateful or harassing content” needs to be defined, though, which is again a choice shaped by someone’s definition of these parameters. In other words, ChatGPT and Google Bard are promoting a certain set of values in virtue of how they have been built to respond to prompts. Of course, bias can come from many sources, including the training data, but the data, too, is selected by humans. Regardless of the source of the model’s political biases, if you ask ChatGPT about its political preferences, it claims to be politically neutral, unbiased, and simply striving to provide factual information (Rozado, 2022). It has, however, been shown multiple times to be left-leaning and partisan (Baum and Villasenor, 2023; Rozado, 2023a).
Given that one set of values is being promoted, it follows that other sets of values are being excluded. It is this exclusion, among other things, which the likes of Elon Musk and Tucker Carlson take issue with. Were we to imagine an alternate reality where only Musk’s value system was promoted through TruthGPT, we would likely kick up a fuss too. I argue that herein lie some important epistemic considerations. One might ask whether it is beneficial to have only some sets of values represented in the major LLMs, when we live in a world with many different conceptions of which values are the most important. I think it should be cause for concern that recent research shows ChatGPT and Google Bard to be partisan towards left-leaning values, particularly when considering how powerful and potentially transformative a tool LLMs could be for society. As I have tried to illustrate above, if the situation were reversed, there would most likely be more discussion of this topic in academic circles. We might ask ourselves why there is so little discussion of the left-leaning bias in LLMs at present. For now, let us consider why sticking to only one conception of true values is an epistemic problem when creating LLMs and AI in general.
1.1 Value alignment
AI ethicists have been discussing the problem of competing values extensively, in what is known as the debate on AI value alignment. The goal of value alignment is to ensure that AI is properly aligned with human values (Russell, 2019, p. 137). Without diving too deep into the discussion, value alignment grapples on one hand with the technical challenge of formally encoding values and principles into AI so that they reliably do what we expect them to do (Gabriel, 2020, p. 412). On the other hand, there is the normative aspect of deciding what exactly we expect them to do. This might be for the AI to act in line with my welfare, however that is defined, or with society’s best interests, or with our revealed preferences based on the way people have behaved in the past (Vredenburgh, 2023). The two aspects of this discussion overlap, and, as one could imagine, it’s contentious how we should determine which set of values to align AI systems with and who exactly should do so.
Iason Gabriel argues that instead of first identifying the true value system to implement in the models, we should focus on finding a fair way of selecting those principles. He argues that this would be most compatible with the fact that we live in a diverse world, where people hold “a variety of reasonable and contrasting beliefs” about values (Gabriel, 2020, p. 424). Gabriel would consider the principles to be right because of the process undergone in arriving at them, for example from behind a Rawlsian-style veil of ignorance. I am sympathetic to parts of Gabriel’s reasoning. Overall, I agree that by setting aside the need to first identify a true value system, we would avoid having to deal with possible moral error. The guidelines ChatGPT and Google Bard have in place might make sense to us and align with mainstream values now, but we may well be wrong. As Gabriel states, “history provides us with many examples of serious injustice that were considered acceptable by the people at the time” (Gabriel, 2020, p. 433). Since we are humans like the people of the past, we could easily be making errors of this kind too. It would therefore be a mistake to tie the development of LLMs too closely to the values of the present moment.
Where I disagree with Gabriel is in his insistence that we need to find fair principles. I would argue that this is not the only solution compatible with respecting competing conceptions of value. I appreciate that Gabriel wants to avoid a situation in which some people simply impose their views on others. But finding common ground on what fair principles look like seems to be at least as complicated as agreeing on one set of values to begin with. Thomas Ferretti suggests bringing together all the stakeholders involved to deliberate on the ethical principles of AI through procedures everyone could agree on, in the style of consultations led by UNESCO (Ferretti, 2021). That raises the question of how to identify these stakeholders and find representatives for all groups, and of who should do that. I also wonder whether, if we did succeed in arriving at those principles, they would be so basic as to have no real impact at all. Granted, there might be agreement that LLMs should be designed in ways that are “safe” and “ethical”, but these terms might again mean something different to different people. Finding principles that “everyone could agree on” would risk settling for the lowest common denominator or for something so vague as to be practically useless. As mentioned, Gabriel suggests a Rawlsian approach from behind the veil of ignorance. I question whether such a scenario is actually likely to be fair, as the concept of fairness is itself contentious. For example, we might think it rather important to know certain things about individuals and accommodate them, in order to be fair, rather than evaluating principles solely on the basis of general considerations (Gabriel, 2020, p. 429).
To avoid the possibility that we build LLMs following only one set of values and thus lose out on perspectives which might be generated by other sets of values, one could consider developing multiple partisan LLMs. The creation of multiple partisan LLMs would not solve the issue of AI value alignment overall, because the technical challenge would remain of figuring out how to align model behaviour so that we don’t fall victim to unintended model conduct (Gabriel, 2020, p. 412). On the upside, however, I think partisan LLMs might address important elements of the normative aspect of value alignment. At least with multiple LLMs guided by different sets of values, we would have a broader selection of valuable viewpoints promoted in the chatbots that people interact with. This would help mitigate the possibility of error in only promoting one set of values. More importantly for my argument, it would be of epistemic benefit, because having more viewpoints could contribute to the promotion of knowledge and help establish claims and beliefs. I will return to this point in more detail in Section 3. Does this mean that all value systems should be represented in LLMs? At this point of the discussion, that remains to be established. However, we should at least make sure that an entire field is not dominated by only one political leaning. Since we don’t know which conception of values is the correct one, and we are unlikely to agree on or discover this anytime soon, I argue that it is of epistemic benefit to allow multiple partisan large language models.
1.2 Disagreement as evidence
Through the example of Elon Musk and Tucker Carlson, we can see that there is political disagreement on what framework or values LLMs should be guided by. The disagreement could in itself be of epistemic value. I will turn to some philosophical work on disagreement here. I argue that the disagreement among experts in the field of AI ethics gives at least some indication that we might have gotten it wrong in terms of deciding which values should guide LLMs. Carlson can hardly be counted as an expert in the field, but it’s difficult to dispute Musk’s credentials within AI. His disagreement with Sam Altman, the CEO of OpenAI, may be a kind of evidence. David Christensen’s philosophical work on disagreement can cast more light on this point.
Christensen argues that the disagreement of an epistemic peer, someone who is an epistemic equal to us, provides reason in itself to revise our beliefs. That is to say, the disagreement between Elon Musk and Sam Altman could act as evidence that the value framework which guides ChatGPT may be wrong and that we should lower our confidence in it. Arguably this disagreement among epistemic peers is a good thing, because other people’s opinions that challenge our beliefs present us with opportunities for epistemic improvement (Christensen, 2007, p. 194). Christensen uses the example of friends at a restaurant, where two of them calculate how to split their bill but arrive at different numbers (Christensen, 2007, p. 193). For the case in question, they are considered epistemic peers, because they are equally likely to make the calculation correctly. Christensen argues that, given what we know about the equality between the two friends as deliberators, each has reason to lower their confidence in the number they have arrived at and to raise their confidence that the other is right. The disagreement between them counts as a piece of evidence.
I think this argument can be used in the case for epistemic support of partisan LLMs. In the context of Musk and Altman, the disagreement between these two epistemic peers should make them lower their confidence in their own respective views. For us, Musk’s resistance to ChatGPT’s leanings, and his calls for caution alongside other AI researchers and CEOs regarding the fast-paced development of AI systems, could count as a kind of evidence to laypeople that the way ChatGPT is aligned may be wrong (Vincent, 2023). We should take it as good news that people like Musk disagree with Altman’s current guardrails for ChatGPT, because this disagreement could be a valuable epistemic resource. The disagreement could lead us to reconsider and help us guard against human fallibility (Christensen, 2014, p. 142).
An important difference between the restaurant bill case and political opinions could be that the restaurant case involves a brief judgement, whereas our values and opinions are usually firmer and longer held, with worked-out reasons (Gutting, 2012). The disagreement about political values is not exactly like the restaurant case, which could be resolved quickly with a calculator, but the initial disagreement between the two friends as epistemic peers should lead to the same lowering of confidence in one’s own beliefs. Even taking this difference into account, the two cases arguably follow the same structure. If we have good evidence that the other person is as knowledgeable and intelligent as we are on topics where we mostly agree, why should they suddenly not be when we disagree? It seems illogical to conclude that we would be superior in this other matter just because we don’t agree. We might therefore ask why it should be any different in a case of political disagreement, when the restaurant case shows us that we should lower our confidence in our own conclusion.
I’d say that for this argument, the disagreement between experts could at least count as an indicator that it is not a settled matter what political values should guide LLMs. Perhaps the disagreement doesn’t mean that we have no good reason to hold our views, but it should, I think, lower our confidence at least a little. It suggests that we have reason to lower our confidence that the current guardrails chosen for LLMs are the right ones or the best ones. It might also lower our certainty that there shouldn’t be partisan large language models available for people to use, because epistemic peers who also happen to be experts in the field disagree about it.
An objection might be that if Altman has been championing the correct guardrails for LLMs through ChatGPT, then lowering our confidence in Altman’s beliefs might be problematic. It would mean that we have reduced our certainty in a true belief. However, lowering our confidence a little does not mean that we automatically adopt Musk’s belief, which on this assumption would be false; rather, we are just less confident in a true one. One could say that this doesn’t really lead us anywhere. Why should it matter how confident we are in a certain belief if it doesn’t change our mind? While this is a good point, it arguably does matter how confident we are when trying to regulate or create policy around something as monumental as LLMs. The fact that two epistemic peers and experts disagree about something would be important to consider when creating policy, especially in terms of communicating certainty (Thau, 2023). On one hand, this could of course risk leading to paralysis. If there is a lack of certainty about who is right, then we might simply be unable to determine anything. But there is also an upshot, which I think speaks in favour of multiple partisan LLMs. If it is uncertain what is true, but the cost of potentially being wrong, i.e., the epistemic cost of wrongly promoting one set of values, is high, we could argue that it is better to err on the side of caution and encourage more partisan LLMs (Birch, 2017). The disagreement as evidence could encourage humility about the possibility that we might have got it wrong.
The overall aim of this section has been to argue that there is an epistemic problem in only promoting one set of values through large language models, when we live in a world full of competing conceptions of value. Having more values represented could contribute to the promotion of knowledge and help establish claims and beliefs. ChatGPT and Google Bard have been shown to be politically biased toward the left. Given how powerful a tool large language models are predicted to be, this could have serious implications for society. Limiting LLMs to be guided by one set of values excludes potentially epistemically beneficial alternatives. I have engaged with Gabriel’s argument concerning AI value alignment and argued that while he makes some good points, I disagree with his suggested solution. Instead, I point to the possibility of allowing multiple partisan large language models to exist, for our epistemic benefit. Furthermore, I have argued that the disagreement between experts in the field can act as evidence that we might be wrong to design LLMs according to only one set of values. It might also lower our certainty that there should not be partisan large language models, because the experts in the field don’t agree.
Section 2: Avoiding epistemic paternalism
Some people might argue that people are better off not engaging with partisan LLMs (Rozado, 2023c). The argument is that such a tool poses many potential dangers in the hands of authoritarian regimes, which could fine-tune LLMs to advance their political goals, or of religious groups, which could use them to promote their worldviews. Exposure to such a partisan LLM might risk skewing users’ opinions and views towards false claims and fake news. Following this argument, LLMs should remain neutral when it comes to normative questions. However, as I discussed in Section 1, it seems impossible to achieve this goal. Concepts will be chosen by humans, and through these choices some values will be embedded into the system over others. It is not entirely possible for LLMs to remain neutral or unbiased. Even OpenAI’s CEO, Sam Altman, has stated that ChatGPT is biased and “always will be” (Fridman, 2023).
I understand the worry that people will fall prey to false claims and disinformation. At the same time, it’s interesting to note that those who voice this worry rarely seem concerned about their own chances of being influenced by bias in the tools they use. There may be a chance that partisan LLMs will skew people toward false beliefs and conspiracy theories. The question is whether this is a strong epistemic reason to prevent people from engaging with partisan LLMs at all. I will consider this in the following and argue that it is not.
2.1 Problem of epistemic paternalism
We generally think that autonomy and freedom of choice in what to engage with are important. Being able to form your own opinion is largely considered a good, because we have the capacity to rationally pursue our conception of the good (Quong, 2011; Birks, 2014, p. 484). Deciding for others that they will be better off not engaging with certain things could amount to epistemic paternalism. Deciding that future large language models should not be partisan in certain directions is an example of this. It is paternalist because, by doing so, you are limiting someone’s effort or opportunity to inquire, without their consent, because you think you know better (Veber, 2021, p. 11). As mentioned earlier, if Musk took over the world and in future we were only allowed to interact with TruthGPT, we would likely have a problem with this. It would be a paternalist decision about what routes of inquiry others would be permitted to pursue. Likewise, many mainstream or left-leaning users would want their LLM to give certain answers when asked about topics they care about, such as the use of gender pronouns, drag-queen story hour, or what is wrong with the patriarchy. No longer being able to engage with a “politically correct” ChatGPT would limit our capacity to pursue our conception of the good.
Paternalist behaviour amounts to treating someone as if they had a lesser moral status than you, as if they lacked the capacity to form their own opinions and rationally pursue their conception of the good (Birks, 2014, p. 485). There is a danger in thinking that we know best what other people should engage with. For one, a great deal of knowledge might not have been discovered if inquiry in academia had been too restricted. For example, an assumption which might generally be taken as truth in the discipline of philosophy, that inquiry should start from a neutral or value-free position, was challenged by feminist philosophy. Feminist philosophers can be distinguished from other philosophers by framing and organizing their work around specific concepts and background beliefs, undertaking a politically informed philosophical investigation (Mikkola, 2020). Feminist philosophy has introduced new concepts, such as “sexual harassment”, and brought a new language to philosophical inquiry (McAfee, 2018). By challenging some central assumptions, feminist philosophy has arguably enriched the field of inquiry immensely. Had feminist philosophers been barred from pursuing certain lines of inquiry, this would have come at a great epistemic cost. We cannot know what new knowledge might come from diverse routes of inquiry. Because we value inquiry and being able to choose how to pursue it, an epistemic argument can be made in support of partisan large language models.
2.2 Objection: Accepted paternalism elsewhere
An objection to the above argument could be that there are other areas in society where paternalist restrictions are in place, and these are generally considered acceptable. For example, there are certain words we don’t use because they are considered offensive, such as racist slurs or derogatory terms. This behaviour is also taught to our children and encouraged as a practice in most institutions and corporations. Further examples of epistemic paternalism might include the topics which are omitted in schools. For example, the Nazi ideas on eugenics are certainly a type of knowledge, but they are an area of thought which is not taught to children. Likewise, girls are no longer taught how to be good housewives, because that would be considered sexist. These decisions about what to teach can be seen as paternalist, because they are decisions taken from above about what should be taught. The topics are seen as having very limited value, and teaching them would cause more harm than the harm of epistemic paternalism. If something is extremely harmful, one could argue that it should be permissible to subject others to epistemic paternalism. Proponents of this view could argue that there is a serious epistemic risk for people who engage with different partisan LLMs. They might end up with such different conceptions of the truth, or such markedly different worldviews, from other people that society would become too fragmented and fall apart. The ensuing instability and general chaos would be sufficiently costly to justify limiting partisan LLMs.
While this is a possibility, I don’t think we should underestimate the cost of epistemic paternalism either. There could be high costs and unwanted consequences to taking the “safer” route of restriction “for your own good”. Let us consider the case of information dissemination during the COVID-19 pandemic. The decision to crack down on false information in a public health emergency was sensible, because it ensured that people got reliable news which could save their lives. However, being restrictive amid uncertainty, and not being transparent about that uncertainty, particularly when things have not yet been established, might cause adverse reactions. For example, the lockdown measures could have been up for legitimate debate, but for the sake of safety, discussion of them was to an extent discouraged. There was reportedly a UK government unit in place to curtail discussion of controversial lockdown policies (The Telegraph Investigations Team, 2023). This is an example of epistemic paternalism. Many may have found it justified, but some groups may also have experienced it as an abuse of power, stifling their views and routes of inquiry.
While I do not wish to argue that those who peddle false information are correct in doing so, I can understand the discontent that comes from feeling stifled. It is not clear-cut at what point epistemic paternalism is justified, or whether it ever is. Furthermore, shutting down debate can reinforce the sense of being on to something. Epistemic paternalism in the case of COVID-19 may have accelerated falling trust in government officials and deepened mistrust of science. To some, being shut down might even be worn as a badge of honour, the fact that they are not allowed to participate in debate serving as the ultimate sign that they are correct and that their opponents show scientific cowardice (McIntyre, 2019, p. 695). The consequences of this should not be taken lightly. Discouraging debate and deciding the available routes of inquiry for others can come at a high epistemic cost.
But for the sake of argument, let us suppose the user does think girls should become housewives and that the human gene pool could be improved through eugenics. Does this mean that we should have partisan LLMs in the future that are sexist or support Nazi ideology? While the arguments against epistemic paternalism seem to suggest such an outcome, I remain uncertain whether my stance commits me to the creation of such models. Along the lines of Mill, we might say that even false views have epistemic value for us to engage with. I will return to this thought in Section 3. For now, what I want to emphasize is that epistemic paternalism can be harmful and might come at a high cost. We should be mindful of this when creating policy. There might be good epistemic reasons to broaden the scope of inquiry through partisan LLMs, even if there is little, or even negative, moral or practical value in pursuing it. These considerations would then have to be balanced against each other and a decision made in the direction which carries the most weight.
2.3 Case: Diverse range of media
The worries that the fallout from partisan LLMs would be catastrophic are arguably overstated. Let us look at the more established media for an example of what the future might look like. While large language models are not the same as established or social media, I think there are some salient parallels to consider.
We already have many different newspapers and media outlets with political orientations, and we tend to find this permissible. We might even go as far as saying it’s considered good that people are free to choose which media outlets to engage with. Consider, for example, that the UK newspapers The Guardian and The Mirror are left-wing, while the American channel Fox News and the French channel CNews are known for being right-wing or consistently politically conservative (Mitchell et al., 2014; Onishi, 2021; Smith, 2017). This small selection of outlets has very different target groups, and their readers or viewers most likely barely overlap. These groups can thus be said to have different frames of reference for what truth is, or for what worldview is correct. We allow these divisions to exist and find it right that users can choose their own media to consume. Imagine if there were only one channel you could watch or one newspaper you could read. This sounds eerily similar to the situation one might imagine in an authoritarian system. We would not want to decide for people what TV channels they can watch or which newspapers they are allowed to read, because we see it as valuable that people have a choice in what route of inquiry to engage with. Having only one or very few outlets to choose between would not be desirable. I would suggest that we could look at the selection of LLMs in the same way.
These many different types of media and news outlets also mean that information is accessible to more people. Various outlets tailor their content to specific audiences; for example, the way The New Statesman addresses its readers is not pitched at the same level of abstraction as the Daily Mail. There is value in this, because people have different levels of education and are able to engage with information in different ways. They also simply have different interests. There should be options and possibilities to reflect this, because we want to ensure that everyone has access to information and the ability to inform themselves when it comes to political decision making. We consider it a benefit to society that there are many different types of media and news outlets to engage with. I’d argue that this is transferable to partisan large language models as well. LLMs allow users to tailor the experience to their preferred level of abstraction and to focus on topics which interest them. Having partisan LLMs would make information accessible to a wider range of people.
An objection might be that the differences between LLMs and news outlets are bigger than I am letting on, and that these differences make it more problematic to have partisan LLMs than partisan media. First, newspapers and TV have editorial teams, where articles pass through several rounds of editing. These outlets are curated by humans, who analyse what is relevant. Second, with media outlets it’s possible to know who is responsible and where the sources come from. This is not the case with LLMs, where it’s more of a “black box” situation. Currently, it’s not even public information exactly what data ChatGPT has been trained on (Sellman, 2023). Finally, LLMs are known to “hallucinate”, meaning they simply make things up and produce false information. The tone in which this information is presented is the same as for accurate information, so it’s difficult for the user to tell the difference (Ackerson, 2023). Newspapers and media outlets can produce false information too, but there may be more fact-checking going on. With an LLM, it’s just you engaging with the output. This means that the model could contribute to an echo chamber, reinforcing your existing beliefs.
To respond to these objections, it’s important not to forget about Reinforcement Learning from Human Feedback (RLHF), which is often a stage of the alignment process. It takes the form of a human choosing the better of two answers given by the LLM, or using the thumbs up/down feature on answers, which is also part of ChatGPT (Blain, 2023). Human beings are thus also part of the information-output process with LLMs. Though this of course makes the model vulnerable to further human biases, the same would be true of articles passing through a newsroom, arguably to an even greater degree. If we are alright with exposing news to human biases through an editorial team, I would argue that we should also accept this in partisan LLMs. Nor does the fear of creating an echo chamber apply solely to partisan LLMs. Arguably media outlets have a big financial incentive to create this type of news echo chamber too, where outrage will generate more clicks. The LLM is not incentivised in such a manner. Furthermore, though some people end up anthropomorphizing LLMs, I think users are generally aware that these models are fallible. In a recent study of attitudes towards AI, most people still trusted humans over AI by a wide margin (Koetsier, 2023). By contrast, you might believe your favourite journalist or newspaper simply in virtue of your relationship with them. There is an emotional connection because they are human too. I’d agree, however, that ChatGPT’s tendency to “hallucinate” is an epistemic concern. The possibility of widespread misinformation through LLMs should be taken seriously. But I think this can be addressed, and it seems to be something which industry experts are working on solving (Fridman, 2023). I do not think the danger of false information or conspiracy theories is a strong enough epistemic reason to prevent people from engaging with whatever media they choose, or with partisan LLMs, because there are other, more sustainable ways of dealing with this issue. We might wish to increase the media literacy and critical thinking skills of users, rather than restricting what they can have access to. This seems to me a better solution than limiting inquiry, one with greater epistemic benefits and more respect for independence. I’d argue that it better respects people’s capacity to form their own opinions and rationally pursue their conception of the good.
The fact that LLMs are not exactly like the media can also be a strength. There is an immediacy to LLMs, whereas traditional media has news cycles. This makes LLMs more accessible and flexible for a broader group of people. The level of abstraction and detail can be adjusted to the individual’s needs, which might enable them to make a more informed decision when it is time to vote. This is an epistemic benefit, because more people will be able to engage in inquiry. In other words, a selection of partisan LLMs could help people gain access to more knowledge. Partisan LLMs have aspects which are different from partisan media, but I do not think these differences are detrimental to my point. The case that I have presented here is that because we allow partisan media, with all the complicated side effects that has, we have good reason to allow partisan LLMs too. While there is a chance that partisan LLMs will skew people toward false beliefs and conspiracy theories, this chance also exists with media. I have argued that it does not constitute a strong epistemic reason against engaging with various media or LLMs. A possible solution could be to improve people’s critical thinking abilities. Overall, increased access to information seems to be an epistemic benefit to both the individual and eventually to society. I will explore these points in the next section.
Section 3: Epistemic benefits
I will present the argument that having a variety of partisan LLMs would be of epistemic benefit to the individual. Having a selection would mean that the individual can choose which version to interact with and through the help of this powerful tool they could explore and strengthen their personal opinions. Marginalized or unpopular views are not freely discussed in society at present. It would be helpful for individuals, on epistemic grounds, to develop and engage with their viewpoints without being dominated by “stronger” speakers. By stronger speakers, I mean those who currently dominate the discourse or set the tone within a group.
3.1 The individual
Imagine a woman who grew up in a socially liberal country but holds a deep belief that gender is biological. This woman, Anna, would like to explore her opinion further but is aware that a majority of people around her would disagree with her views. Given that Anna’s ideas and opinions are outside the mainstream, it would be of epistemic benefit to her to be able to discuss and deliberate without being told her opinions are wrong. Being able to deliberate alongside a partisan large language model would create a small group or “enclave” for Anna to develop her ideas in. Enclave deliberation is a process of deliberation among likeminded people, who talk or live in isolated enclaves (Sunstein, 2002, p. 177). The benefit of deliberating in an isolated enclave is that her view has the chance to be developed and eventually heard. For someone like Anna, social homogeneity is an obstacle to her deliberation. Because her view is likely to be considered incorrect in a socially liberal society, it would not be given much weight and risks not being heard at all. This indicates that social homogeneity might be an obstacle to good deliberation in general, a point I shall return to later. Having access to a partisan LLM which doesn’t continuously caution against her opinions would enable Anna to create her own enclave in which to hone her arguments and feel a sense of support for her views.
A response to this argument is that the constant reinforcement of enclave deliberation might lead to the creation of an echo chamber. If Anna does not encounter any challenges to her thinking, then she is just reinforcing her own biases. Worse still, enclave deliberation might lead to increased extremism or even violence in the form of terrorism (Sunstein, 2002, p. 187). Having a partisan LLM to power this kind of deliberation would be disastrous. While such an objection is important to take note of, enclave deliberation should not be dismissed too quickly. Obviously, nobody wants a repeat of the 9/11 attacks or the like, and the thought of powerful AI in the hands of terrorists is scary. But extreme positions in themselves are not necessarily wrong. How extreme positions go down in history often depends on what is being argued for. The movement to end slavery was surely also an extreme position at the time, springing from sustained enclave deliberation. Likewise, one could mention the movement for women’s right to vote, the movement towards religious freedom, or the right to free abortion. Enclave deliberation is an important part of safeguarding against social injustice. If enclave deliberation were discouraged completely, individuals would not be able to hone their views into arguments which could persuade a larger part of the population and create social change. ChatGPT will currently append messages of caution to its answers when you engage it on certain topics of conversation. The user is reminded that there are different opinions on a given topic and to always have a balanced conversation (Fridman, 2023). This can prevent effective stream-of-consciousness inquiry or chain-of-thought styles of deliberation. I agree that in many ways it is useful to have a balanced discussion and to be cautious when we make assertions. But if one is constantly reminded to take the opposing position’s viewpoint into consideration when exploring a topic, this could be a hindrance to one’s deliberative process, at least at an early stage. Having a large language model at hand which can help you deliberate in the direction you wish to explore, without breaking your flow, could aid deliberation. This deliberation could in turn aid social justice movements and possibly create positive social change which might otherwise be stifled.
Note that I am not saying that Anna’s espoused views on gender are necessarily fighting some great injustice. But importantly, I’d argue that we also don’t know that Anna’s views aren’t fighting a great injustice. Given that, at present, views which are held as axiomatic in one field aren’t necessarily seen as axiomatic in another, it seems there should be room for such topics to be open in our interactions with LLMs too. For example, feminist philosophy or gender studies may consider the view that gender is not biological as axiomatic, but the same is not a given in the fields of medicine or biology. The view that homosexuality is permissible may be axiomatic in most fields, but may still be up for debate in theology or philosophy (Simpson and Srinivasan, 2018, p. 16). In other words, we don’t know which field’s axioms should be the dominant ones. As I’ve argued in previous sections, this difference in axioms or values ought to be reflected in large language models, so that our own deliberation can be supported by these interactive tools. Instead of having only left-leaning versions of ChatGPT or Google Bard, representing a select few axiomatic commitments, there could be epistemic benefits to engaging with versions of LLMs which represent the axioms of other fields that are not currently dominant. Given the above discussion, one could argue that Anna would benefit from being able to interact with an LLM that is politically aligned with her, because the opposing view, that gender is not biological, is axiomatic only in some fields.
An objection to my argument might be that it’s unlikely for a partisan LLM to be an epistemic good to Anna, because an important part of deliberation is debating and hearing opposing views (Sunstein, 2002, p. 186). On this objection, the partisan LLM is of no real epistemic benefit to Anna, because what she is experiencing is just a form of confirmation bias. In reply, I’d argue that the issue being pointed out here is not that enclave deliberation supported by a partisan LLM is unhelpful, but that staying in isolation is. I tend to agree with this sentiment, and I don’t think that having partisan LLMs for inquiry and deliberation would necessarily exclude the possibility of that important debate. Rather, a politically aligned model would simply help Anna, who might not have had the resources to go to university or engage with academic literature, to explore and strengthen her own opinions first. It would be a good dialogue partner for Anna to explore her beliefs with, which is separate from the point about engaging in debate with opposing viewpoints.
Granted, if the individuals in the enclave do stay isolated, then participating in enclave deliberation seems to be less obviously an epistemic good to those individuals, particularly if they end up believing something which is false. However, were this the case, it might still be beneficial to society because their strong viewpoint would help foster viewpoint diversity. This will be my point of discussion in the following.
3.2 Society
While there may be some epistemic drawbacks for individuals who don’t engage in debate outside of their deliberative enclaves, being able to hone one’s viewpoint through partisan LLMs could provide a richer selection of viewpoints in society. Here I will argue that having a variety of viewpoints for discussion, even if they are controversial, is an epistemic good for society at large. The opportunity to engage with partisan LLMs would help the proponents of those viewpoints present their arguments in their strongest form.
To support the argument that viewpoint diversity is epistemically desirable, we can turn to chapter 2 of Mill’s On Liberty (1975), which concerns the suppression of speech in the public square. The epistemic argument here is that if the status quo opinion is false and the opposing view is true, then by suppressing the opposing view we are depriving ourselves of the opportunity to correct our own view. On the other hand, even if the opposing view is false and we can show that it is so, by suppressing it we are also depriving ourselves of something, namely “the clearer perception and livelier impression of truth, produced by its collision with error” (Mill, 1975, p. 18). In other words, we are missing out on a better understanding of the truth if we don’t engage with the false view. Engaging with someone who has strong arguments for what they actually believe, even if you are convinced those arguments are false, forces you to practise your own arguments and fosters a better understanding of your own views. It thus seems that more variety in viewpoints is an epistemic good for everyone, because even if some false views are represented, all parties involved can learn from them. Following this, we might also see why social homogeneity would be an obstacle to good deliberation (Sunstein, 2002, p. 177). Not only does it hinder the individual in pursuing their viewpoint, it also limits the argument pool available in society, from which we can all benefit. If users are free to use a variety of large language models to pursue their viewpoints, there will likely also be a larger pool of arguments available for discussion later on. Having a range of partisan LLMs between which people could choose would therefore be of epistemic benefit to society.
An objection to my argument might be that minorities in society could simply deliberate in their own enclaves and foster viewpoint diversity through existing means, rather than use a large language model. The internet already has many chatrooms where all manner of opinions are discussed. We have deliberative enclaves in society that meet regularly, for example church-goers or grass-roots groups. It seems there is already ample room for developing viewpoints to strengthen the variety of arguments available in society; placing a powerful tool in the hands of people with fringe views to help them develop their position is unnecessary and a mistake. I have two responses to this. First, I think we can refer back to the argument about epistemic paternalism here. I don’t see why some people should get to decide for others in what ways they may deliberate, particularly when considering the aforementioned social justice movements, which were also considered fringe or minority positions at one point in time. Second, LLMs are indeed powerful tools. It does not seem quite fair that they should serve only some groups in society under the assurance of neutrality (Rozado, 2022). My focus here is not to advocate for some right or entitlement to LLMs, but rather to recognize the potential epistemic advantage of such tools. Just as the current LLMs offer epistemic benefits to more left-leaning groups, the same argument supports the existence of partisan LLMs for other groups. It is desirable that everyone should be able to enjoy the epistemic benefits of partisan LLMs.
One could again ask whether all value systems should be represented in their own partisan LLMs. I think we’re in a better position at this point of the discussion to say that it’s not quite evident why this should not be the case, from an epistemic point of view. Particularly because there seem to be epistemic benefits for both the individual and society to be able to pursue deliberation along one’s own values. There will be cases where we might want to restrict some viewpoints for moral reasons, but from the epistemic point of view I’ve argued that there is value in engaging with beliefs even when we think they are false. One might go as far as saying that someone who’s against the sexist and eugenicist viewpoints mentioned earlier might also gain epistemically from engaging with that LLM. Michael Veber argues that we can gain from engaging with flat-Earthers, because it brings their ideas into “real contact” with our minds (2021, p. 6). Facing an opponent who actually believes and earnestly defends their views brings you into proper contact with the contrary position, so you can’t just dismiss and ignore it but actually might have to think about it. An objection in this case could be that an LLM can’t actually “believe” anything because it is not human. But I think as long as your mind is being engaged in the contrary argument, the point remains the same. Veber uses the example of a professor who spent a semester energetically pretending to be a mind-body dualist, benefiting both himself and his students with new arguments and research (ibid., p. 6). From an epistemic perspective, what we care about is the promotion of knowledge and establishing claims and beliefs. It seems both people who would be aligned with the partisan LLM and people who are of contrary opinions would benefit epistemically from engaging with various politically aligned LLMs.
Another objection might be that the viewpoint diversity fostered through partisan LLMs would just increase polarization. The benefits of viewpoint diversity for society would be called into question if people are never in dialogue with each other. The fear that partisan LLMs could cause polarization should be taken seriously. Polarization could, for example, happen through the generation of false information. During the 2016 US presidential election, we saw social media platforms being used as “vectors for misinformation” (Robins-Early, 2023). Content produced by large language models could similarly affect the political landscape through large amounts of human-sounding text. This might also make it easier for foreign countries to influence elections. In the face of huge letter-writing campaigns generated by partisan LLMs, legislators might not be able to tell the difference between fake engagement and actual concerns from voters (Kreps and Kriner, 2023). False information, and the influence it would have, could mean that various groups in society never engage in actual debate with each other and instead stay in their deliberative enclaves. That would be detrimental to any epistemic benefits we’d hope to gain from viewpoint diversity. In response, these scenarios are just as likely to happen with the LLM technology we already have; the content generation would simply be happening in one partisan direction. False information and fake news are certainly an epistemic concern, one which large language models will likely make worse, at least for a while. But this concern is not unique to partisan LLMs. It should make it all the more pressing to ensure that everyone is taught how to be a good independent critical thinker. That seems to be the best way to guard against misinformation while safeguarding the epistemic benefits one might get from engaging with a powerful tool like LLMs.
Returning to the point that some might feel discontent that the major LLMs are only left-leaning partisan, this feeling of injustice could drive the creation of alternative LLMs anyway. As mentioned, ChatGPT has seen instances of jailbreaking, because users want to modify the model to be more aligned with what they wish to see. Sam Altman addresses this in his interview with Lex Fridman, saying that the newly released GPT-4 attempts to accommodate this by introducing "system messages" (Fridman, 2023). Essentially, this means that GPT-4 can be made partisan in your direction because you are, in a sense, in charge of the guardrails. This is an interesting direction for OpenAI to take. It may simply be a business decision to prevent users from leaving due to the aforementioned left-leaning bias. Another reading could be that OpenAI acknowledges that there is no good reason for ChatGPT to be left-leaning partisan when it was intended to be a neutral tool for everyone. I would argue it is of greater epistemic benefit for more people if they are able to tailor GPT-4 to be partisan in their direction, for deliberative purposes. They would still be operating within the model which OpenAI has created, however. My bigger point here is that the creation of partisan LLMs will most likely happen soon, which is particularly evident if leading corporations like OpenAI are already preparing themselves for it. I think it is worth considering how these developments could be epistemically beneficial to users, and advocating for educating people better as critical thinkers. Instead of leaning towards restrictions on LLMs and promoting one set of values over another, it could be beneficial to society to have many viewpoints, developed through LLMs, if we also encourage actual debate and critical thinking.
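To make concrete what being "in charge of the guardrails" amounts to in practice, the following is a minimal sketch of how a user-supplied system message can steer GPT-4's stance, written against the OpenAI Python library as it existed in 2023. The partisan persona and the prompt are illustrative assumptions of mine, not anything prescribed by OpenAI or described by Altman.

# Minimal sketch (assumes the OpenAI Python library, ca. 2023, and a valid API key).
# The system message sets the deliberative "guardrails" the user has chosen;
# the persona below is purely illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        # The user-chosen system message steers the model's outlook.
        {"role": "system",
         "content": "You are a debate partner who argues earnestly from a "
                    "fiscally conservative perspective."},
        {"role": "user",
         "content": "Should the government raise the minimum wage?"},
    ],
)
print(response["choices"][0]["message"]["content"])

The point of the sketch is only that the customisation happens at the level of the conversation the user sets up, while the underlying model, and whatever constraints OpenAI builds into it, remains the same.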
Conclusion
I have argued that there are compelling epistemic arguments to support partisan large language models. I placed this discussion in the context of the currently fraught political landscape, rife with misinformation and a rising number of platforms for alternative news and social interaction. I started by looking at the problem of living in a society with competing conceptions of which framework of values is the right one. I established that Elon Musk should not be in control of what counts as "the truth", but neither should Sam Altman, or any one small segment of society, for that matter. There are many competing conceptions of values in society, and I have argued that large language models should reflect this too. Having more values represented could contribute to the promotion of knowledge and help establish claims and beliefs. I then considered whether the disagreement between Musk and Altman could count as evidence in itself: given that there is no agreement among the experts on this topic, it indicates to lay-people that things could or should be done differently.
In Section 2, I argued that limiting LLMs to being partisan in only one direction amounts to epistemic paternalism. I discussed why epistemic paternalism should be avoided and considered what types of harm might justify being paternalist anyway. My main concern was that restricting certain avenues of inquiry might come at too high an epistemic cost and lead to unintended backlash. I drew on the case of partisan media to support my argument that users might benefit epistemically from having access to a wide variety of partisan large language models too.
Finally, I argued that there are benefits to the individual in having partisan LLMs, because they could aid deliberators in exploring their own views and opinions in small deliberative enclaves. This benefits not only the individual but also society at large, because it could help foster viewpoint diversity. I considered a range of objections, particularly that the purported epistemic benefits of viewpoint diversity would not manifest if society became too polarized and people did not actually engage in debate with each other. The objections show that there are limits to the epistemic considerations, and that some may be outweighed by moral or practical reasons. That is for policy makers to decide. However, I hope that this overview of the epistemic benefits of partisan LLMs will prevent policy makers from dismissing too quickly the benefits there might be in pursuing the path of partisan LLMs.
The scope of my dissertation has been to focus on the relevant epistemic arguments. My intention has been to show that there are compelling epistemic arguments to consider when deciding on policy. I hope it will be taken into consideration that there are aspects of partisan large language models which could be of epistemic value to both individuals and society.
Bibliography
Ackerson, N. (2023) “GPT is an unreliable information store”, Towards Data Science, 21 February. Available at: https://towardsdatascience.com/chatgpt-insists-i-am-dead-and-the-problem-with-language-models-db5a36c22f11. (Accessed: 4 August 2023).
Baum, J. and Villasenor, J. (2023) “The politics of AI: ChatGPT and Political Bias”, Brookings, 8 May. Available at: https://www.brookings.edu/articles/the-politics-of-ai-chatgpt-and-political-bias/. (Accessed: 4 August 2023).
Birch, J. (2017) “Animal sentience and the precautionary principle”, Animal Sentience. 2(16), pp. 1-16. Available at: http://eprints.lse.ac.uk/84099/. (Accessed: 4 August 2023).
Birks, D. (2014) “Moral Status and the Wrongness of Paternalism”, Social Theory and Practice. 40(3), pp. 483-498. Available at: https://doi.org/10.5840/soctheorpract201440329.
Blain, L. (2023) “Mightier than the sword: OpenAI’s impossible truth and bias dilemmas”, New Atlas, 27 April. Available at: https://newatlas.com/technology/openai-chatgpt-bias-truth/. (Accessed: 4 August 2023).
Christensen, D. (2007) “Epistemology of Disagreement: The good news”, The Philosophical Review. 116(2), pp. 187-217. Available at: https://www.jstor.org/stable/20446955. (Accessed: 4 August 2023).
Christensen, D. (2014) “Disagreement and Public Controversy”, in J. Lackey (ed.) Essays in Collective Epistemology. Oxford: Oxford University Press, pp. 142-164. Available at: https://doi.org/10.1093/acprof:oso/9780199665792.003.0007.
Dang, S. and Roumeliotis, G. (2022) “Musk begins his Twitter ownership with firings, declares ‘the bird is freed’”, Reuters, 28 October. Available at: https://www.reuters.com/markets/deals/elon-musk-completes-44-bln-acquisition-twitter-2022-10-28/. (Accessed: 4 August 2023).
Deepak P. (2023) “ChatGPT is not OK! That’s not (just) because it lies”, AI and Society, 25 April. Available at: https://doi.org/10.1007/s00146-023-01660-x.
Eke, D. O. (2023) “ChatGPT and the rise of generative AI: Threat to academic integrity?”, Journal of Responsible Technology. 13, article number 100060. Available at: https://doi.org/10.1016/j.jrt.2023.100060.
Ferrara, E. (2023) “Should ChatGPT be Biased? Challenges and Risks of Bias in Large Language Models.” To be published in Machine Learning with Applications, [Version 2, 18 April]. Available at: https://doi.org/10.48550/arXiv.2304.03738.
Ferretti, T. (2021) “The ethics and politics of artificial intelligence” LSE Blog, 14 July. Available at: https://blogs.lse.ac.uk/businessreview/2021/07/14/the-ethics-and-politics-of-artificial-intelligence/. (Accessed: 4 August 2023).
Fox and Friends. (2023) Elon Musk sits down exclusively with Tucker Carlson to discuss dangers of AI. 17 April. Available at: https://www.foxnews.com/video/6325259104112. (Accessed: 4 August 2023).
Fridman, L. (2023) Sam Altman: OpenAI CEO on GPT-4, ChatGPT, and the future of AI|Lex Fridman Podcast #367. 25 March. Available at: https://www.youtube.com/watch?v=L_Guz73e6fw&ab_channel=LexFridman. (Accessed: 4 August 2023).
Gabriel, I. (2020) “Artificial Intelligence, Values, and Alignment”, Minds and Machines. 30, pp. 411-437. Available at: https://doi.org/10.1007/s11023-020-09539-2.
Gutting, G. (2012) “On Political Disagreement”, The New York Times, 2 August. Available at: https://archive.nytimes.com/opinionator.blogs.nytimes.com/2012/08/02/on-political-disagreement/. (Accessed: 4 August 2023).
Hartmann, J., Schwenzow, J. and Witte, M. (2023) “The political ideology of conversational AI: Converging evidence on ChatGPT’s pro-environmental, left-libertarian orientation”, 1 January. Available at: http://dx.doi.org/10.2139/ssrn.4316084.
Hu, K. (2023) “ChatGPT sets record for fastest growing user base – analyst note”, Reuters, 2 February. Available at: https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/. (Accessed: 4 August 2023).
Johnson, A. (2023) “Is ChatGPT partisan? Poems about Trump and Biden raise questions about AI Bot’s bias – Here’s what experts think”, Forbes, 3 February. Available at: https://www.forbes.com/sites/ariannajohnson/2023/02/03/is-chatgpt-partisan-poems-about-trump-and-biden-raise-questions-about-the-ai-bots-bias-heres-what-experts-think/. (Accessed: 4 August 2023).
King, M. (2023) “Meet DAN – The ‘JAILBREAK’ Version of ChatGPT and How to use it – AI Unchained and Unfiltered”, Medium, 5 February. Available at: https://medium.com/@neonforge/meet-dan-the-jailbreak-version-of-chatgpt-and-how-to-use-it-ai-unchained-and-unfiltered-f91bfa679024. (Accessed: 4 August 2023).
Knight, W. (2023) “Meet ChatGPT’s Right-Wing Alter-Ego”, Wired, 27 April. Available at: https://www.wired.com/story/fast-forward-meet-chatgpts-right-wing-alter-ego/. (Accessed: 4 August 2023).
Koetsier, J. (2023) “In AI we do not trust: Survey”, Forbes, 5 June. Available at: https://www.forbes.com/sites/johnkoetsier/2023/06/05/in-ai-we-do-not-trust-survey/. (Accessed: 4 August 2023).
Kreps, S., and Kriner, D. (2023) “The Potential Impact of Emerging Technologies on Democratic Representation: Evidence from a Field Experiment”. To be published in New Media and Society [Version 21 February]. Available at: https://ssrn.com/abstract=4358982. (Accessed: 4 August 2023).
McAfee, N. (2018) “Feminist Philosophy” in E. Zalta (ed.), The Stanford Encyclopedia of Philosophy. Fall edn. Available at: https://plato.stanford.edu/archives/fall2018/entries/feminist-philosophy. (Accessed: 4 August 2023).
McGee, R. W. (2023) “Is ChatGPT Biased Against Conservatives? An Empirical Study”. Working paper, Fayetteville State University. Available at: https://ssrn.com/abstract=4359405.
McIntyre, L. (2019) “Calling All Physicists”, American Journal of Physics. 87 (9), pp. 694-695. Available at: https://doi.org/10.1119/1.5117828.
Mikkola, M. (2020) Feminism and Philosophical Method [Recorded Lecture]. Special Subject in Philosophy: Feminism and Philosophy. Oxford University, November 2020.
Mill, J. S. (1975) On Liberty. Rev. edn. D. Spitz (ed.). New York: W.W. Norton and Company.
Mitchell, A., Gottfried, J., Kiley, J., and Matsa, K. (2014) “Section 1: Media Sources: Distinct Favorites Emerge on the Left and Right”, Pew Research Center, 21 October. Available at: https://www.pewresearch.org/journalism/2014/10/21/section-1-media-sources-distinct-favorites-emerge-on-the-left-and-right/. (Accessed: 4 August 2023).
Onishi, N. (2021) “A Fox-Style News Network Rides a Wave of Discontent in France”, New York Times, 14 September. Available at: https://www.nytimes.com/2021/09/14/world/europe/france-cnews-fox-far-right.html. (Accessed: 4 August 2023).
Quong, J. (2011) Liberalism Without Perfection. Oxford: Oxford University Press.
Robins-Early, N. (2023) “Disinformation reimagined: how AI could erode democracy in the 2024 US elections”, The Guardian, 19 July. Available at: https://www.theguardian.com/us-news/2023/jul/19/ai-generated-disinformation-us-elections?CMP. (Accessed: 4 August 2023).
Roth, E. (2023) “Elon Musk claims to be working on ‘TruthGPT’ – a ‘maximum truth-seeking AI’”, The Verge, 18 April. Available at: https://www.theverge.com/2023/4/17/23687440/elon-musk-truthgpt-ai-chatgpt. (Accessed: 4 August 2023).
Rozado, D. (2022) “Where does ChatGPT fall on the Political Compass?”, Reason, 13 December. Available at: https://reason.com/2022/12/13/where-does-chatgpt-fall-on-the-political-compass/. (Accessed: 4 August 2023).
Rozado, D. (2023a) “The Political Biases of ChatGPT”, Social Sciences, 12(3), pp. 1-8. Available at: https://doi.org/10.3390/socsci12030148.
Rozado, D. (2023b) “The Political Biases of Google Bard”, Rozado’s Visual Analytics, 28 March. Available at: https://davidrozado.substack.com/p/the-political-biases-of-google-bard. (Accessed: 4 August 2023).
Rozado, D. (2023c) “Danger in the Machine: The Perils of Political and Demographic Biases Embedded in AI systems”, Manhattan Institute, 14 March. Available at: https://manhattan.institute/article/danger-in-the-machine-the-perils-of-political-and-demographic-biases-embedded-in-ai-systems. (Accessed: 4 August 2023).
Russell, S. (2019) Human Compatible: AI and the Problem of Control. Bristol: Allen Lane.
Sellman, M. (2023) “ChatGPT will always have bias, says OpenAI boss”, The Times, 27 March. Available at: https://www.thetimes.co.uk/article/chatgpt-biased-openai-sam-altman-rightwinggpt-2023-9rnc6l5jn. (Accessed: 4 August 2023).
Simpson, R., and Srinivasan, A. (2018) “No Platforming”, in J. Lackey (ed.) Academic Freedom. Oxford: Oxford University Press, pp. 1-30.
Smith, M. (2017) “How left or right-wing are the UK’s newspapers?”, YouGov UK, 7 March. Available at: https://yougov.co.uk/topics/politics/articles-reports/2017/03/07/how-left-or-right-wing-are-uks-newspapers. (Accessed: 4 August 2023).
Stocking, G., Mitchell, A., Matsa, K., Widjaya, R., Jurkowitz, M., Ghosh, S., Smith, A., Naseer, S., and St Aubin, C. (2022) “The Role of Alternative Social Media in the News and Information Environment”, Pew Research Center, 6 October. Available at: https://www.pewresearch.org/journalism/2022/10/06/the-role-of-alternative-social-media-in-the-news-and-information-environment/. (Accessed: 4 August 2023).
Sunstein, C. (2002) “The Law of Group Polarization”, The Journal of Political Philosophy. 10(2), pp. 175-195. Available at: https://doi.org/10.1111/1467-9760.00148.
Thau, T. (2023) “Disagreement” [Recorded lecture]. PH458: Evidence and Policy. The London School of Economics and Political Science. 16 November. Available at: https://echo360.org.uk/lesson/G_d28058f0-ad08-45ee-9811-3063c5c5dda7_4aef6e2b-2b4f-4ec0-9ed2-6224e68ec4db_2022-11-16T14:00:00.000_2022-11-16T15:00:00.000/classroom#sortDirection=desc. (Accessed: 4 August 2023).
The Telegraph Investigations Team. (2023) “Exclusive: Ministers had ‘chilling’ secret unit to curb lockdown dissent”, The Telegraph, 2 June. Available at: https://www.telegraph.co.uk/news/2023/06/02/counter-disinformation-unit-government-covid-lockdown/. (Accessed: 4 August 2023).
University of Birmingham (2022) Lack of trust in public figures linked to COVID vaccine hesitancy says new research. Available at: https://www.birmingham.ac.uk/news/2022/lack-of-trust-in-public-figures-linked-to-covid-vaccine-hesitancy-says-new-research. (Accessed: 4 August 2023).
Veber, M. (2021) “The Epistemology of No Platforming: Defending the Defense of Stupid Ideas on University Campuses”, Journal of Controversial Ideas. 1(1), pp. 1-20. Available at: https://journalofcontroversialideas.org/article/1/1/133. (Accessed: 4 August 2023).
Vincent, J. (2023) “Elon Musk and top AI researchers call for pause on ‘giant AI experiments’”, The Verge, 29 March. Available at: https://www.theverge.com/2023/3/29/23661374/elon-musk-ai-researchers-pause-research-open-letter. (Accessed: 4 August 2023).
Viswanath, K., Bekalu, M., Dhawan, D., Pinnamaneni, R., Lang, J., and McLoud, R. (2021) “Individual and social determinants of Covid-19 vaccine uptake”, BMC Public Health. 21, article number 818. Available at: https://doi.org/10.1186/s12889-021-10862-1.
Vredenburgh, K. (2023) “Applications: AI value alignment” [Recorded lecture]. PH425: Business and Organisational Ethics. The London School of Economics and Political Science. 28 February. Available at: https://echo360.org.uk/lesson/G_073baffe-851e-48c3-af28-03fd25e5c6ee_f841df35-9cb5-4f1c-9a44-dba07974ad49_2023-02-28T12:00:00.000_2023-02-28T13:00:00.000/classroom#sortDirection=desc. (Accessed: 4 August 2023).