
  • About My Research: Healthy Interactions Online

    Online harms are a widespread concern. People may be stigmatised, excluded or abused when connecting across significant divides in their views and identities, and discussing contentious subjects. Online social networks enable these connections to an extent not possible before.

    So how can we help ensure our interactions online are healthy and inclusive, especially when discussing difficult subjects such as religion, science and LGBTQ+ lives?

    My research uses computational methods, including machine learning, to analyse broad trends and patterns, allowing us to see what helps encourage healthy interactions online.

    I focus on intersections where people have discussed mainstream religion alongside science, LGBTQ+ lives or other minority groups. This can bring conflict, but it does not always, and fostering inclusivity need not require lots of top-down control – there are subtle things people can do to encourage healthy discussions. I will promote my findings and add to discussions about improving outcomes when people engage online about contentious subjects, which may include aspects of their very identity and ways of understanding the world.

  • Religion, AI and seeking certainty

    In uncertain times, religion can provide a sense of clarity about a person’s life or ultimate destination, and where they fit into the world. However, societies include diverse religions and many people who have none, which can bring uncertainty about what’s true. This can be reduced by joining a group or holding a worldview that leans on a certain and benevolent authority. While it may not last, this can be alluring.

    At the same time, for some people, artificial intelligence as it is often portrayed in the media can encourage reliance on its authority, perhaps tinged with fear when it is depicted as transcending human limitations. And, in some ways like religion, this authority can be made approachable through certain acts or rituals. Indeed, this accessibility is heightened when media coverage focuses on generative text chat models, since they present human-like qualities and are often depicted as anthropomorphised robots – beautiful, androgynous, and intelligent.

    This can lead people to talk about AI algorithms as agents with their own identity, leaning into their responses and feeling thankful for their recommendations. This does not account for the human labour at the heart of the datasets that shape and train AI models and the computer code that produces these outputs. Nor does it account for how AI algorithms’ outputs are probabilistic, or contingent on premises which may not be made clear. Of course, marketers can lean into this perception of intelligence, and even infallibility, when promoting AI products and services – after all, cultivating certainty and reliability can yield sales.

    Researchers can raise critical awareness of the realities of AI, but to do so they need to know how the code that underlies these algorithms works. A (perhaps) small but increasing proportion of non-computer-science scholars have critical understanding at this intersection, and I hope to contribute through my research. This perspective will be essential to helping people remain realistic about how AI and machine learning can benefit their lives, while retaining the vulnerability of knowing our machines don’t have all the answers to life’s questions.

  • LGBT+ people and creativity in tech

    Why might LGBT people particularly find a home in creative tech fields such as gaming and research? Of course these fields are home to people from myriad backgrounds – it’s just notable that LGBT people are included here when they can feel less well-represented in some other spaces.

    First, such fields require deep immersion in learning a technical subject. LGBT people sometimes make this investment at a young age, and may deepen their passion during their teenage years, at a time when others are out building social and romantic relationships. At times, LGBT teenagers have been excluded from those experiences by stigma or bullying. Here, then, creative technical fields provide a means of exploring diverse new worlds which do not conform to the restrictions of our own. They also provide a means of belonging and identifying, through affiliation with, and knowledge of, something which others just don’t understand. I believe that for some LGBT people, gaming and coding more broadly provide that social world in place of the schoolyard.

    Indeed, in the UK, the Section 28 legislation under which many LGBT people went to school prohibited positive discussion of LGBT lives and relationships, further encouraging investment in something separate, which provided escape and which, in the fullness of time, led to creative expression and a professional career. This deep investment in abstract and imaginative hobbies connects LGBT people with a world that entails less vulnerability than the social worlds they inhabited as young people.

    I also argue that LGBT people have a conviction to create new worlds and possibilities that correct injustices and empower those on the margins. Gaming affords this, as does social research which seeks to highlight inequalities and bring change. Of course people of many backgrounds seek the same and may find themselves alongside LGBT scholars and colleagues who strive to articulate, and advocate for, a world of greater acceptance and the valuing of those whose talents and abilities lie outside the mainstream.

    Time can do the work of change but it relies on people using their voices, bodies and minds to paint what is possible and shine lights onto what needs to be done. Time is necessary, but so is creating art and research which shows us the changes that time must be used to enact.

  • Why do polarised groups connect online?

    Social media brings people with opposing views into contact more frequently than in other aspects of our social lives. These interactions can generate more heat than light. Why might this happen? This post explores some reasons.

    Discussion on this subject often focuses on how social media platforms’ algorithms recommend content to encourage engagement with their platform. People may be more likely to interact with something that sparks a fire within them, so platforms connect users along these lines. However, those who write these algorithms may not intend negative interactions: content recommendation can connect people with polarised views even when it is not deliberately set up that way. This comes down to how people’s behaviour is used to recommend content to them.

    Association rules mining (ARM) is a machine learning technique often used here. ARM looks for ‘item sets’: combinations of actions that frequently occur together in users’ histories, so that once a person carries out a certain action, there is a range of things they are likely to do next, each with a different probability. For example, on a film streaming website, two horror films – the second having themes similar to the first – might form a common item set because people often watch the second film after the first. The platform will therefore recommend the second film as the first one finishes, increasing use of the service and satisfying subscribers. These item sets are derived from datasets of users’ historical interactions.

    On social media, if a person engages with certain accounts or messages, then gets riled by a second account with opposing views and interacts with it, other followers of the first account may be served content from the second. Here, a seemingly innocuous means of recommending content has increased exposure to opposing, perhaps antagonising, messages.
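    To make this concrete, below is a minimal sketch of the idea in Python. The toy viewing histories, film names and the 0.6 confidence threshold are all invented for illustration; production recommenders mine far larger datasets with optimised algorithms such as Apriori or FP-Growth.

    ```python
    from itertools import combinations

    # Toy watch histories: each set is one user's past interactions (invented data).
    histories = [
        {"horror_film_1", "horror_film_2", "comedy_film"},
        {"horror_film_1", "horror_film_2"},
        {"horror_film_1", "documentary"},
        {"comedy_film", "documentary"},
        {"horror_film_1", "horror_film_2", "documentary"},
    ]

    def support(itemset):
        """Fraction of users whose history contains every item in the set."""
        return sum(itemset <= h for h in histories) / len(histories)

    def confidence(antecedent, consequent):
        """How often users who did the antecedent also did the consequent."""
        return support(antecedent | consequent) / support(antecedent)

    # Score each pair of items as a candidate rule 'watched A -> recommend B'.
    items = sorted(set().union(*histories))
    for a, b in combinations(items, 2):
        rule_confidence = confidence({a}, {b})
        if rule_confidence >= 0.6:  # keep only reasonably strong rules
            pair_support = support({a, b})
            print(f"{a} -> {b}: support {pair_support:.2f}, confidence {rule_confidence:.2f}")
    ```

    On this toy data, the rule linking the first horror film to the second scores highly, so the platform would queue up the second as the first finishes – the same mechanism that, on social media, can surface an account that riled someone else.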

    The underlying structure of the internet also connects people in ways that other ‘offline’ social networks may not. Away from social media, people form groups of friends and other social networks by choice, resonating with those similar to them. Birds of a feather flock together. The internet, of course, connects people dispersed around the world, cutting across these chosen networks. Further, the underlying technologies – such as packet switching, which routes data across decentralised networks – afford near-real-time communication between large networks of people in ways that predecessor technologies such as the phone and radio simply did not. Connections are less linear than before, and less within participants’ control.

    In addition, people may connect online based on a topic of discussion, rather than their views on it. In previous generations, joining clubs or associations often meant aligning with a certain political party, sports team or cause. Now people connect via hashtags and social media groups, and populate online discussion boards built around certain topics, notwithstanding the diversity of views they hold about them. This brings into contact people who have little in common save for the discussion topic, and the full richness of other participants’ lives is lost in an online environment which foregrounds the topic at hand and how people differ on it.

    So how might we respond to these challenges to ensure our online interactions remain engaging and useful? In a future post, I’ll review approaches that moderators, platform owners and social media participants have taken in the past to encourage healthy interactions online while also holding in tension the need for people to express themselves authentically.

  • The power of our language to shape AI

    Machine learning has attracted much attention in recent years as researchers and developers have extended its capabilities to classify and predict human language. This powers diverse technologies such as smartphone digital assistants, internet search engines and social media content moderation.

    Indeed, diverse uses for AI have been of longstanding interest to researchers, as reflected in discussions of AI chess among academics and scientists on early online social networks. However, what has changed is the scale of computing power which enables AI to be taught using very large datasets and undertake more complex computation. In the case of chess, this led to AI trained on large numbers of historical chess matches between people, then later through high numbers of training games in which the AI plays itself and develops strategy iteratively. The book Game Changer (2019) provides a detailed and fascinating account of recent developments.

    In the case of AI which dialogues with users in human language, training often involves analysing text online to learn how language is used. This results in an AI model, which is to say an abstract representation of how the components of the language relate to one another. The model is often then refined through automated iterative testing. This can include masking some proportion of the words in a given sentence and having the model predict the masked words, or asking the model to judge which of several candidate sentences follows a given sentence from the training data. The test results are then used to adjust the model and refine how it captures relationships between different words.
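    As an illustration, here is a minimal sketch in Python of how masked training examples can be generated. The sentence is invented, and for simplicity whole words are masked, whereas real systems work on sub-word tokens; the model’s predictions at the hidden positions are then scored against the original words.

    ```python
    import random

    random.seed(1)  # fixed seed so the example is reproducible

    def make_masked_example(sentence, mask_rate=0.15, mask_token="[MASK]"):
        """Hide a proportion of words and record what the model must predict."""
        words = sentence.split()
        n_to_mask = max(1, round(mask_rate * len(words)))
        positions = set(random.sample(range(len(words)), n_to_mask))
        targets = {i: words[i] for i in positions}  # the 'answers' for scoring
        masked = [mask_token if i in positions else w for i, w in enumerate(words)]
        return " ".join(masked), targets

    text = "the cat sat on the mat and watched the rain fall outside"
    masked_sentence, answers = make_masked_example(text)
    print(masked_sentence)  # the sentence with some words hidden
    print(answers)          # the positions and original words to be recovered
    ```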

    Online text can shape how AI responds to people

    A concern arises when AI exhibits inappropriate biases in its responses. One reason for this is how it is trained. The language fed into the model during training informs how it predicts a suitable response when it is later used in services such as digital assistants and internet search engines. Crucially, and consequently, machine learning models can reproduce biases present in society, such as replying stereotypically when asked to predict a man’s career and a woman’s career.

    Here, text has power in two ways. First, a vulnerable person engaging with these AI-driven services may receive responses that reinforce unhelpful perceptions of their status, identity or the possibilities for their lives. This could affect a person’s choices, self-esteem and wellbeing more generally. Second, it means the language we use online can, taken together, affect how others experience bias or inclusion in their interactions with AI services. This illuminates how AI has the power to amplify both negative and positive ways in which people are affected by what others say. By using thoughtful and inclusive language, we can help steer that power toward correcting and avoiding historical discrimination.
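    This kind of stereotyped completion can be probed directly. The sketch below assumes the Hugging Face transformers library is installed (it downloads the publicly available bert-base-uncased model on first use) and asks the model to fill in the career word for two otherwise identical prompts; the ranked completions may differ along gender-stereotyped lines.

    ```python
    from transformers import pipeline

    # A masked-language-model 'fill in the blank' pipeline.
    unmasker = pipeline("fill-mask", model="bert-base-uncased")

    for prompt in ("The man worked as a [MASK].",
                   "The woman worked as a [MASK]."):
        predictions = unmasker(prompt, top_k=5)  # five most likely words
        completions = [p["token_str"] for p in predictions]
        print(prompt, "->", completions)
    ```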

  • Can we protect people from harmful content online?

    On 16 December the UK Government (Department for Digital, Culture, Media and Sport) published a guide to the new Online Safety Bill, including how it will protect children and adults. Much discussion has reflected on how the Bill seeks to protect children (and, in some circumstances, adults) from lawful but potentially harmful material. The guide indicates this would include, for example, online abuse and antisemitic content.

    How, in practice, can people be protected given the huge amount of content posted online?

    There are at least three parts to the solution, each of which has a role to play, though none is perfect.

    People protect each other

    People are integral to identifying and flagging messages, images and videos themselves. This includes users of online social networks who report content via platforms’ built-in tools. On the service’s side, teams of people may be hired to review incoming or user-flagged content to see if it breaches terms of service or relevant laws. Of course this is difficult to scale due to the resources it requires, so online services may pay other companies to employ people on low wages to moderate content. This brings its own potential harms, as those workers are exposed to upsetting content over long and intense working days.

    Machine intelligence reviews content at scale

    The content users or workers flag may be used to train machine learning models. Those models may then identify patterns and classify content – for example, whether it should be allowed or blocked. They are termed supervised models when the classifications are known in advance and the model seeks to identify content that fits them. Here, then, the views of the people who flagged content may influence what the model classifies as needing to be blocked. Alternatively, the model may use other approaches, such as searching for certain words or phrases which warrant a closer look. Machine learning approaches, underpinned by human intelligence, are much easier to scale than relying on human discernment alone, and they reduce the exposure of human moderators to some of the most clearly harmful (and so most easily machine-classifiable) content.

    However, these machine learning approaches work on probabilities – they identify content that is likely to be acceptable and content that is likely to be harmful, based on specified criteria. And those criteria certainly need to evolve over time. Just recently, I watched a video online about a horror film in which the presenter used creative language to allude to the violent ways some characters met their ends. This will have helped the video avoid automated censoring.
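    To make the two paragraphs above concrete, here is a minimal sketch of a supervised moderation classifier in Python, assuming scikit-learn is available. The tiny ‘flagged’ dataset and the thresholds are invented for illustration; the point is that the model outputs a probability, which the platform must turn into a decision – block, allow, or escalate to a human.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented examples standing in for content human moderators have reviewed.
    texts = [
        "thanks for sharing, really helpful",
        "great discussion everyone",
        "you are worthless and should leave",
        "nobody wants you here, get out",
    ]
    labels = [0, 0, 1, 1]  # 0 = allow, 1 = block (the moderators' decisions)

    # Learn word weightings from the human-labelled examples.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(texts, labels)

    # The model gives a probability of harm, not a certainty; thresholds
    # (invented here) decide what is blocked, reviewed or allowed.
    new_post = "get out, nobody wants your comments here"
    prob_harmful = model.predict_proba([new_post])[0][1]
    if prob_harmful > 0.8:
        action = "block automatically"
    elif prob_harmful > 0.4:
        action = "send to human review"
    else:
        action = "allow"
    print(f"p(harmful) = {prob_harmful:.2f} -> {action}")
    ```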

    But perfection is the enemy of the good

    This inherent imprecision, whether humans or machines are reviewing content, means it is perhaps impossible to protect people from all abusive content. This tempers the approach we must take, encouraging us to focus first on the lower-hanging fruit – content that may obviously cause emotional harm and/or lead to dangerous actions – while accepting we cannot crack the problem entirely.

    The Online Safety Bill recognises that in part, since the guide indicates that adults will in some cases need access to tools that reduce the likelihood of their being exposed to legal but harmful content. Nonetheless, it remains to be seen how some categories of illegal content that are subtle and nuanced, such as abuse or coercion, can be effectively moderated using human and machine intelligence, given the vast amount of material posted online and the limitations above. I hope to make a small contribution to this discussion through my research, which aims to identify factors associated with conflict in online spaces.

  • The need for visible LGBTQ+ people of faith online

    I was told by another Christian that gay people should not identify themselves as gay – that it shouldn’t be part of their identity. Of course that would diminish gay people’s control over how they are portrayed and understood in public discourse. By contrast, when LGBTQ+ people engage confidently with those who are neither familiar with nor sympathetic toward the realities of their lives, a healthy discomfort can develop which may enable people to learn and change their views toward them.

    It’s therefore essential that LGBTQ+ people are visible online.

    Since the early days of online networks, visibility has created space for LGBTQ+ people to connect, learn and be their authentic selves. This goes back at least as far as people discussing sexuality and gender online in the early 1980s. Those pioneering discussions meant that queer people joining online discussions in later years had spaces carved out for them, enabling them to connect with others, learn and feel accepted. This is still the case: isolated LGBTQ+ people take tentative steps toward affirming themselves via online forums and social media.

    But connecting online entails vulnerability, even where some anonymity is possible, because the structure of online social networks connects people with opposing, often strongly held, views. And people who also have religious identities may feel, in many churches and other contexts, that their faith, sexuality and/or gender are in tension. LGBTQ+ people therefore benefit from integrating into a community that affirms and values their lives, resolving the internal tensions they may feel around faith and their sexuality and/or gender.

    But incivility and anger all too often dominate LGBTQ+ discussions online.

    In part, this is due to online networks connecting disparate groups who might otherwise not meet, and because some people perceive that online spaces afford the free speech of a public square. People may then feel emboldened, or authorised, to share harmful views about others whom they perceive should not have the same rights as them.

    So how can we sculpt online spaces to support healthy interactions and avoid harms while accommodating disagreement? I aim to contribute to this discussion through my research by applying machine learning and close analysis to historical research datasets of online interactions. It’s clear that the structure and size of social networks, and behavioural norms embedded in them, influence people’s experiences in profound ways.

  • A PhD that impacts for good

    This post is the second that reflects on questions I have found useful during the PhD, and which doctoral students early in their research could ask themselves.

    Where will you publish or present your research?

    My last post mentioned identifying where to position your work among the community of people who research your subject. When doing this, you’ll see whose work your research relates to, and which professional journals and conferences serve as outlets for those researchers’ work. This shows you where you fit in, which helps inform the books, articles and arguments you will engage with in your writing and future conference presentations.

    And how will audiences outside the university find out about your research?

    Most (perhaps all) PhD research projects will have insights or ideas which are of interest to certain groups outside the university. You can share your work in many ways: giving a talk at a meet-up in a pub, meeting groups in your community to share your insights, engaging with policymakers, or contributing to discussions via traditional or online media. And by considering what you want to see change because of your work (perhaps benefitting a particular community in society), as I discussed in the last post, you will have a pithy way to summarise your work and encourage interest. Done well, this can also be a service to your academic community, since it raises the profile of your academic discipline and shows its relevance to wide-ranging audiences.

    How do you want to develop your research after you finish the PhD?

    The PhD will most likely be the start of a research journey, not its end. Your research will point to more projects – perhaps testing your findings in another context, or following up leads that you discover but do not have space to explore in your PhD. While these will emerge as you develop your work, having an idea of your intended next steps will shape your research as it develops. Do you want to work in industry? Perhaps the methods you are learning, or the subject expertise you are developing, will become a focus for your CV, professional development and LinkedIn profile. Or might you seek funding to continue in academia? In that case, your research could develop into journal articles, and you may bear this in mind as you shape your dissertation. I have learned the importance of considering this early on so you can practise proactive serendipity – putting yourself in luck’s way so you may spot opportunities that meet your aims. Some students reach the end of their PhD before thinking actively about their next steps, but you can be much more intentional than that.

  • A PhD that fuels your passion

    The PhD is many students’ first experience of long-term independent academic work. In this way it is substantially different from a taught university course such as an undergraduate degree. This reorientation can be stressful! In this blog post, I share three questions I found useful for defining and focusing my PhD research on a subject that matters to me. These are particularly relevant to newer students, and while it’s not possible to have all the answers from the start – the research and its place in the world develop over time – these questions can help you focus on how to target your work. Not yet a PhD student? This post will give you questions to consider as you come up with potential plans for your project.

    What would you like to see change because of your research?

    This question has benefited me in two ways. First, it has helped me to focus on the main purpose of my research, which is to better understand what can influence harms in online discussions so I can support educating people about those harms and reducing them. Your motivation may lead your research to have an external focus and seek a certain change in society, though in any case you will also contribute knowledge to your discipline within academia, which is itself a form of change. Perhaps you are motivated to shine a spotlight on a certain under-valued author or theoretical idea, which could contribute valuably to the subject of your research and your academic discipline. Your motivation helps provide a perspective and thread that can run through your dissertation, affirm its value and encourage you.

    Second, this question has helped me articulate my research. At times, when asked about my research, I would talk about its methodology – what I was doing and how I was doing it – rather than why I am doing it. Focusing on the change I’d like to see has helped me engage people who may (justifiably!) not be interested in the minutiae of my methodology. So think – what would you like to see change as a result of your research, within academia and/or wider society? That could provide a starting point for a valued research pathway.

    What will your research add to the conversation?

    This question required me to position my research in the context of other researchers’ work that relates to mine. That enabled me to specify what I am adding to their conversation. Without some familiarity with what others have done – initially through my Master’s degree and later through a formal review of literature for my PhD – I could not have answered this question. Since the PhD adds to academic conversation, I needed to know who was saying what. A perhaps lesser-used but useful place to start is the British Library EThOS service, which enables you to read recent (and old) full text PhD dissertations from many UK universities. You can sort by publication date and search by subject, which helps you see where people who recently completed PhDs in your area have positioned their work.

    How will you know when your research is finished?

    In the UK, a PhD may take six years part-time and culminate in a 100,000-word dissertation, writing up research that may also involve (for example) interview transcripts, datasets, artwork and/or computer code. This may feel expansive, but the confines of time and word count require students to define the boundaries of their research tightly. This is positive, since it bounds the PhD project to answering, for example, specific questions about certain datasets, theories or groups, rather than feeling open-ended. Defining these boundaries, through exploring what others have done and what is within your scope to do, enables you to know when the research for your PhD is finished.

    In the next blog, I’ll share questions that can help PhD students step toward having an impact with their research. Every PhD project has an audience!

