BredowCast No. 95: Digitalisation Research Seminar: Exploring Digital Freedom
Transcript

Kristina Kobrow: Welcome to the BredowCast, the podcast of the Leibniz Institute for Media Research | Hans-Bredow-Institut. My name is Kristina Kobrow, and as I welcome you to this new podcast episode, Christiane Eilders, the Director of the Center for Advanced Internet Studies in Bochum, welcomes researchers to this year's Digitalisation Research Seminar, in short: DigiSem. This is the conference that I would like to talk about in this episode, and because it is being held in English, this episode is also in English, just for a change. What is the DigiSem?

Dr. Nina Hahne: DigiSem is an event for doctoral and postdoctoral researchers, and it is hosted by four German research institutes that all have a focus on the digital transformation. We want to provide our researchers with a platform for presenting and discussing their research projects and, of course, also for receiving feedback from experts from the institutes and for general networking and having a good time.

Kristina Kobrow: That was Nina Hahne, one of the four organizers, and the four institutions Nina mentions here are: the Center for Advanced Internet Studies, the Bavarian Research Institute for Digital Transformation, the Weizenbaum Institute and the Leibniz Institute for Media Research | Hans-Bredow-Institut. There is also a topic for the two-day seminar: digital freedom. Maria Staudte, also one of the organizers, explains.

Dr. Maria Staudte: In the selection of this topic, we were inspired by the topic of the Science Year 2024, which is “freedom”. The Science Year is an initiative by the German Federal Ministry of Education and Research, which starts a discourse on a current or pressing issue each year. Freedom is something that we've interpreted for the digital sphere, where it of course opens up many great opportunities. But freedom can be challenged and maybe even endangered by different developments on a national but also an international scale, such as the growing influence of autocratic systems and political polarization everywhere. And as our institutes also understand themselves as platforms for discussions on current societal transformations, we wanted to look at this topic from the interdisciplinary perspective that we have and bring together all these different aspects.

Kristina Kobrow: Digital freedom can imply many things. For Maria herself, digital freedom means using services without having to give up her data or be tracked. This is one possible approach. For the seminar's participants, digital freedom means…

Seminar participants: Equal access.
To be able to use or choose digital services and devices that are not based on extractivist production and whose supply chain is not dripping with blood.
The ability to navigate and freely participate in online spaces without fear of harassment or online abuse of any kind.
Transparency.
The freedom of expression. To not be afraid of ending up in prison as my friends in Belarus did.
To use technology as I please. Not to have to sell my soul to get basic things like food or the right to go from A to B.

Kristina Kobrow: Digital freedom has many different faces. It is about negative freedom, the absence of external restrictions or constraints, and also about positive freedom, the freedom to act in a self-determined and active manner.
The DigiSem brings many ideas of freedom to light, precisely because the researchers themselves come from different countries, different disciplines and different universities, and work on different research topics. They focus on social and political responsibility, journalism, art, the economy and the healthcare system. It's about opportunities, challenges and the question of autonomy. While a small group discusses whether more autonomy is always better, whether digitalisation promotes democratisation and whether participation should require digital access, I speak with four of the participants about their current research projects and how they contribute to the topic of digital freedom. One of them is Rainer Rehak. He studied computer science and philosophy. As a research assistant at the Weizenbaum Institute in Berlin, he works on civic technology, better known as “civic tech”. He can best explain himself what this is.

Rainer Rehak: We usually use the terminology in a way that means there are communities that employ technology, or that form themselves around a certain technology, develop it or repurpose it, and try to follow civic causes: causes of public interest, sometimes political matters, sometimes social matters. One really political example is “Safecast” or “TDRM”. Those are two projects that want to do the same thing. One originated in Japan, one originated in Germany. It's a radiation monitoring device you can build yourself; the data is then fed into a central database and can be evaluated. The idea behind it was that in Japan and in Germany, we know there are some critical voices regarding nuclear power and especially nuclear power plant accidents. And some people didn't trust the official sources for those kinds of radiation levels. So they built those devices, and for several years now we have had, you could say, an independent network of devices measuring those radiation levels. It's quite a critical topic, but now we have, you could say, a second opinion regarding those radiation levels. At the beginning, those kinds of projects were really criticized and maybe laughed at: why would you do this? And now some of those emergency response teams and security agencies sometimes ask to access the data, because it gives them more insight into the issue as well. So now there's this kind of public agreement, or even official agreement, that it's actually useful to have that. And maybe a second case is “luftdaten.info”, which was measuring the air quality in German cities. Again, it's a highly political topic, for example the fine dust question, the particulate matter coming from car traffic, and surprisingly, in Stuttgart there were only a few official sensors. And this team, this kind of community, thought: isn't it interesting that in such a car city there are only two or three such sensors? So we'll build them ourselves. And then they found that in some streets the values were well over the limits. So you could now ask why that was the case. But now we have a second opinion about this, from a self-built device, and we can have another public debate about those values, because there's not just one source, and that one source might have its own interests as well. So those are two applications that show what civic tech can do.

Kristina Kobrow: Rainer's comments show that civic tech projects offer real opportunities.
They enable pluralistic perspectives and public participation. Basically, they show how democracy works in practice. But, Rainer continues, not all of the projects that were once successful, maybe even considered flagship projects, still exist today.

Rainer Rehak: Well, of course, we usually start the conversation with the ones that worked and that had an effect. However, for every one that worked, there are several that did not work, in the sense that they don't exist anymore. And now we can ask the question, and this is a question that my colleague Andrea Hamm actually shaped: they may have failed, but maybe they were still successful by first showing us something, maybe leaving something behind, and letting us learn something from their failure, from the reasons why they failed. The reasons that are usually analyzed are the internal ones, in the organization, in the community: did they have some kind of flawed internal communication, did the coordination not work, did they not have enough volunteers? But the interesting part, and this is what I am focusing on in my work, is: what are the actual framework conditions that led to a lot of those projects failing? A lot of times, at least judging from projects in Germany like “kleineAnfragen.de” or “Open Parl”, the project was really good, and it was really valued by public officials and by the general public, but it was not funded. It became more popular, but no more volunteers came to help, and there was no money from the public side. So at some point the volunteers got burned out and stopped the projects. And there are many more like this. And we see those people volunteering, and, just to make it clearer, that's why I exaggerate a little bit: it was not their fault, there was just no way for them to deal with the success of their project.

Kristina Kobrow: It becomes clear that it is not always internal operational deficits that lead to the failure of a civic tech project. Rather, it is about structural deficits and external conditions. But the crucial question is, of course: is there room for improvement?

Rainer Rehak: Well, first, one insight is that there's no easy solution. The first step is to understand technology and participation: there is no automatism. We can't just throw the internet or sensors at society and expect more participation. We have to think less technology-centered and more community-centered. What does a community need? When can people afford to participate? Those are not technical questions. And if we want to get the best out of the potential of new digital technologies for participation, we first have to focus on the participation part in order to then unfold those digital potentials. This means we need more resources on the public side. We need partners in local administrations who can be addressed, who can say: hey, we need this to, I don't know, map trees or check the health of bees or something. We need some contact with the local health authorities or the local nature authorities. But they also need to make time for this interaction and support the groups. Maybe we need longer-term funding for those projects that are really successful. Not only the initial funding, after which we call it a success and three years later no one knows about it anymore. So this kind of long-term thinking.
But also, in a way, the willingness in administrations to say: we do our things here at the desk, but there are people out there who also do really good work, so maybe sometimes we go out, ask them and integrate them into our processes. That might be cumbersome sometimes, but first of all it's helpful, and secondly it underlines that we live in a supposedly participatory society. This is what that actually means on the ground: letting the boundaries between administration and citizens become more fluid in a way.

Kristina Kobrow: These are very important aspects that Rainer is addressing. And it reminds me a little of the debates about the so-called citizens' councils. Are they an opportunity for democracy? Or is it even a risk when citizens take over tasks from politicians? Will democracy ultimately be strengthened or weakened? The real question here is probably also the question of the right kind of involvement, the right complement, not whether it is a replacement. And of course, the question of the long-term perspective. What do we really need? Which projects, which processes, which technologies? This question arises not only for us as a society, but also for each individual. And sometimes, in view of the rapid technological developments that are becoming mainstream, the question turns into: what are we made to need?

This is the question that concerns Marco Lünich. Marco is a substitute professor at the Chair of Communication and Media Studies at Heinrich Heine University in Düsseldorf, and he deals with generative artificial intelligence, such as ChatGPT. Marco explains that generative AI does not necessarily lead to a feeling of gaining freedom among students, but rather to a feeling of losing freedom, autonomy and well-being.

Dr. Marco Lünich: Students react to the introduction of, for instance, ChatGPT or all those tools of generative AI that have been released lately. They have to react, whether they want to or not. What I mean by this is: they see other students using ChatGPT and maybe reaping certain benefits from it. So even if they don't want to use ChatGPT, they have to make a deliberate decision whether to do so or not. And if they don't, they have to bear the consequences of not using ChatGPT, because others do and will continue to do so.

Kristina Kobrow: This stress phenomenon, the feeling of having to use a technology because others are doing it too, is known as “technostress”. According to Marco, technostress can manifest itself in various ways: in cognitive overload, in a feeling of futility because academic and career prospects are perceived as weakened, or in a fear of being overtaken by other students. Marco conducted a study of around 1,100 students to find out more about the dimensions of this phenomenon. His results show that students perceive comparatively low cognitive overload, but a higher perception of academic obsolescence and a higher fear of being overtaken by peers. Proactive strategies in curriculum development to mitigate technostress and build student resilience are therefore becoming increasingly important, says Marco. Tools like ChatGPT are, however, still relatively new.
I therefore ask Marco whether the perceived technostress might decrease once students have become accustomed to the technology, or whether it will increase because the technology keeps improving and students' fear of no longer being able to cope without it will grow.

Dr. Marco Lünich: What I would argue is that, keeping in mind the AI hype cycles, it's a little early to make the call. I would assume that many of the associated fears may not materialize, because we have always managed to constructively include technology in the workplace. But we have seen with other technologies that certain downsides and detrimental effects do come true. What it does, especially with regard to technostress, is that it adds complexity to a student's learning experience, because there's a new tool that they may even be required to use in certain fields of study. So we add technology, we add the necessity of gaining new technological skills, and I think this will not stop. There will be even more tools that students may have to learn, or at least have to know about, even if they're not proficient in using them. In general, we're adding complexity, and at some point other things need to be reduced to make room for it. Especially when it comes to the educational experience of students in high school, we always have this discussion: what do they need to learn? We need to ask ourselves the same question when it comes to students in higher education, because we're teaching them to be efficient learners, to learn things autodidactically after they have left university. So we're teaching them how to learn, and for that we're introducing many more tools. It's the Wild West out there: every week some new tool arises, and I do not really see a way out, but we need to find a way to manage this constructively. And I think this is easier said than done.

Kristina Kobrow: Marco is probably right. The question of how technology influences human autonomy, whether it empowers us and enhances our well-being or instead restricts, inhibits and even intimidates us, requires continual reevaluation and ongoing self-reflection. Marco spoke about the “Wild West”, alluding to the many new technological developments and possible applications. Fay Carathanassis and Steliyana Doseva would probably think of something else when they hear this metaphor: social media. The two researchers at the Bavarian Research Institute for Digital Transformation focus on insults and hate speech, and thus on a major challenge, even risk, to digital freedom. According to Fay and Steliyana, there are already many empirical studies that focus on the effects of insults and hate speech on users, or on alternative methods to counter them. Their own research therefore takes a different approach. Steliyana explains…

Steliyana Doseva: Well, our aim at the beginning was to find out more about what people know about reporting mechanisms and how they experience them. And then we also wanted to look at the different types of insults on social media, to provide more information about how to classify them from a legal perspective, and to involve this legal perspective in future research.

Kristina Kobrow: To find out more about the actual use of reporting options among users, the two conducted a study.
And these are their results…

Steliyana Doseva: We interviewed 5,200 people, social media users in Germany aged 18 and above. Our main findings are that people are actually not reporting insults that often. They experience them quite often on social media, but more often in the real world; when we say real world, we mean offline. So insults, and hate speech in the broad sense, are not only an online phenomenon but also an offline issue. We also looked at the reasons why people do not report something when they choose not to. And we found that they sometimes just see a lack of prospects or a lack of options, because they don't think that the platforms are going to help them and remove the content, or they don't think that the state is going to help them, which we consider a big issue.

Kristina Kobrow: Steliyana's explanations show that it is not just about laws and the regulatory options that platforms make available to users. If people do not believe that the platforms and the state will help them, then a remedy is needed. Fay concludes…

Fay Carathanassis: Considering the first results that we have from our study and the interviews that we conducted, we would suggest faster proceedings, for example, for certain minority groups that are especially affected by insults and hate speech content on social media, for example politicians or politically active people. There are initiatives in Germany now from certain federal states, for example Bavaria, to improve that and make it faster. What we also suggest is to ensure that you can identify the person who insulted you, which of course has to be balanced against the right to remain anonymous online. And furthermore, we suggest more cooperation between the platforms and state entities: if the user wishes that evidence is secured and provided directly to the state authority, the platform has to secure the evidence, that is, save the insult, even if it is no longer shown. That is partly already the case. In some cases, the platform is even obliged by the Digital Services Act to forward posts to state authorities, like the police or the public prosecutor, if they are very offensive and severely criminal posts. But in general, it is not obliged to do so. Perhaps we can make the cooperation between the platforms and the state entities more direct, make it better.

Kristina Kobrow: Fay's proposals are very specific. If they were implemented, we would be a step closer to digital freedom. If proactive strategies were consistently incorporated into curriculum development to mitigate the so-called technostress, as Marco said, and if there were also a long-term financial perspective for civic tech projects and better opportunities for cooperation with various stakeholders, we would come even closer to digital freedom, because the focus would be more on people. But these are only specific contexts, and the conference has revealed many more. What further, more abstract steps would be necessary to shape digital freedom together? This is what the seminar's participants had to say…

Seminar participants: As researchers, we need to ask and answer more “why” questions. Why is the result good? Why is an accuracy score high? And we need to report all the data that we have.
I think that we should start with equitable education. There needs to be an ongoing debate: what do we want technology to be? How do we want technology to affect our lives?
Working together is one way to do it, a more collaborative way. Because at the moment we are competing and accelerating and competing even more, and that's going to be a race to the bottom. Rather, I think there should be a more collaborative approach between nation states and between people.

Kristina Kobrow: The Digitalisation Research Seminar also demonstrated the importance of cooperation, of learning from and discussing with one another. It became clear that digitalisation does not necessarily lead to more democracy, autonomy or self-empowerment, nor does it guarantee long-term social empowerment. Digital freedom is not pre-programmed but depends on human responsibility. And it is precisely this assumption of responsibility that is perhaps the greatest opportunity. But, of course, digital freedom also means repeatedly detaching oneself from digital devices and research. In the evening, the researchers come together for dinner.

That's it from the BredowCast for today. The next DigiSem will take place in two years' time, and next year there will be an online conference, the so-called “DigiMeet”. If you want to stay up to date, please check the websites of the institutions linked below or our LinkedIn profile now and then. Until next time, bye-bye.