The Problem with Blockchain


Our programme director Francesca Bria talks to Evgeny Morozov about how digital technology can be democratized.

Morozov: You’ve spent two decades trying to make digital infrastructures more “decentralised.” We’ll talk about these experiences shortly but just to get the basics out of the way: given that vast experience, are you persuaded by the “decentralisation” agenda of today’s proponents of Web3/crypto?

Bria: Unfortunately, the Web3 agenda seems quite similar to that of Web2. The promises, paradigms, narratives, and business models are not as different as they seem. At the beginning, Web2 promised an open digital economy with the potential to democratise the Internet and digital cultural production, making it more social and challenging incumbents’ dominant positions. What happened was the opposite: an unprecedented concentration of market and infrastructural power in the hands of Big Tech.

Today, Web3 and decentralised finance, or “DeFi,” promise to nourish a grassroots movement of cyber-hackers that will decentralise ownership of the real economy. Yet, they often end up reinforcing market concentration and fuelling rampant speculation. That world is often ideologically resistant to regulation and poses risks to the wider financial system, with the $2.5tn-worth of crypto assets in circulation ending up in the hands of fewer (and more centralised) players than we think.

The attempt to build decentralised privacy infrastructures at the network and application layer, such as distributed ledgers and crypto protocols that run on Ethereum and other blockchains, does seem interesting and genuine. This attempt to fix the infrastructure, however, doesn’t take into account the broader challenge of reclaiming control and sovereignty over the entire technological stack, including next-generation connectivity (5G and space connectivity), quantum computing, and the critical components in future value chains such as microprocessors and AI chips.

This purely technical effort at decentralisation also falls short in thinking about the political and social institutions that are needed to take full advantage of it. The big questions that I – and you – have been raising over the past decade, with regard to the political economy of data and infrastructures, to technological sovereignty, to the geopolitics of the stacks, all seem to have dropped off the agenda completely.

What is being “decentralised” is the ability to extract value and make money, incentivising even further the financialisation of social behaviours. Worse, it seems that people pushing the Web3 agenda have learned very little from the experiences of all the other movements, from free software to Indymedia to the rise of digital democratic cities, that did try to build a more decentralised and democratic digital sphere.

Let’s trace some of that history by focusing on your own career trajectory. Most people think of you as part of the policy establishment. Yet you started out with grassroots work in the worlds of media activism, open knowledge, and free software. Could you tell us about those early experiences and how they – and their limitations – shaped your thinking about technology policy?

In the late 90s, the movement advocating for free knowledge and free software merged with the anti-globalisation social movement, spawning initiatives such as Indymedia, the world's first truly global citizen journalism website. It was also in that early period of open Internet experimentation that some of the technical innovations that we would later call “social media” and “Web 2.0” were born, albeit with a completely different spirit.

As we know now, those early movements didn’t manage to sustain themselves and grow, and it’s important to analyse why. I did sense that something was amiss in how we – and I speak as someone who was part of those pioneering initiatives – thought about politics, information, and power. In that period I became convinced that, in order to be sustainable, to scale, and to lead to more systemic innovations, these bottom-up initiatives had to be recognised as critically important attempts at democratic experimentalism. They were important and had to be sustained with public policy interventions, both in terms of regulation and public R&D funding.

It was those efforts to make sense of our failures to take seriously the role of public policy and of the State that led me, around 2005-2006, to start working with the European Commission, with the Regional government of Lazio (my home region), together with governments in Brazil and, later, elsewhere in Latin America that served as inspiration for my work.

Many of those efforts were driven by the desire to learn from the experiences of Project Cybersyn in Chile and imagine what a modern European equivalent of it would be. As we started formulating plans for genuinely European alternatives to Web 2.0 – this was what D-CENT and Decode, the European projects that I led in the 2010s, were about – we were consciously drawing on the Latin American experiences.

Before we get to the Latin American chapters, could you say some more about the most formative experiences in your early career as a “free and open knowledge” activist?

I was strongly influenced by the hackerlabs (and what we would later call “makerspaces”) and their understanding of digital knowledge and software as a common good. These were community spaces that during the late 90s were located inside community social centres or squats, even though sometimes they also functioned independently, like Internet cafes. They offered free access to Internet connectivity and some basic infrastructures, technological assistance, and basic digital skills classes to those who needed them, from ordinary citizens to artists and from activists to immigrants.

There was always a way to make these new technologies serve the social needs of the communities. These open labs were staffed by hackers and software engineers who had knowledge of these emerging technologies; they spoke the language of technological autonomy, popularised privacy and encryption tools, and provided the much-needed technical support during important gatherings (such as those of the so-called anti-globalisation movement).

Thus, in the late 1990s / early 2000s, I felt like I had joined a movement that was able to politicise technology. It saw technology as an infrastructure, as a knowledge tool that could empower communities, with a vision of a more inclusive and innovative future. We did want people to produce their own content and information – the motto of Indymedia was “don’t hate the media; become the media” – and the hackerlabs provided the assistance and the infrastructures that made such autonomy in the field of content production possible.

We were exploring and owning the new territory of digital and network culture. The hackerlabs aimed to appropriate and, in some sense, socialise the critical knowledge behind technology. Inevitably, such efforts sparked a lot of conversations about things like intellectual property: who owned this technology, how one could share it, on what terms, how we could sustain the creators, and so on. The issue of licensing came to the fore, as, to truly serve the agenda of autonomy, the software had to be studied, reproduced, copied, and shared. It’s in this period that the Creative Commons licensing model was born and popularised by Lawrence Lessig.

You were involved in much more than just technical debates about licensing at the time, right? You were also producing content of your own…

Yes, at that point, I was also involved in the production of creative content, such as digital videos and independent documentaries. We were encoding videos manually, working on the first open-source video archive. It was eventually launched under the name New Global Vision and hosted by ECN (short for European Counter Network), an important independent service provider in Italy. We did that way before the BBC came up with its own iPlayer. This was also before YouTube and other Web 2.0 publishing tools.

This radical technology and hacktivist network was active in Italy with projects like ECN, Autistici/Inventati, and Indymedia; in Spain with projects like sinDominio; and in Germany with projects like the Chaos Computer Club – which were very connected to social movements. So all of us who were engaged in content production were also engaged with social and political activism, on issues like migration or social justice, feminism or the environment. Later, this all morphed into the anti-globalisation movement.

During this period I traveled a lot around Europe and also in Latin America. For example, I was in Argentina in 2002 as the country was in a deep political and financial crisis. That’s where I met Naomi Klein, who was working on her own film at the time. I was working on a film named Argentina Arde, about Argentina’s unemployed workers, or piqueteros, in the context of the Argentine financial crisis and the antiglobalisation movement. The film was eventually distributed by the now defunct Italian newspaper Carta.

Between 1998 and 2002, you lived in Amsterdam and it seems to have had a big influence on your subsequent trajectory. Why was it so important?

Amsterdam was a reference point for the free knowledge movement for two reasons. First of all, in the late 90s Amsterdam was a very vibrant place, both culturally and politically. A liberal, open city, it was also a home to many NGOs and foundations, including those of the more progressive variety, fighting for social justice and against climate change. There were a lot of activist organisations that were critical of the IMF and the World Bank – I’m thinking, for example, of the Transnational Institute and ASEED Europe. Many of these NGOs also had public money to spend. This attracted a crowd of very young and bright activists, people who were just out of university and who wanted to make a difference but not in a conventional desk job.

The other reason is that in the 1990s Amsterdam had some public policies that were favouring the transition to the digital society. They had good multimedia academies, pirate radios that were legal, well-funded local TV cable channels. They had public spaces like De Balie that were promoting the emergent digital culture with festivals and conferences such as The Next Five Minutes, which popularised concepts such as “tactical media.” Hackerlabs were everywhere – often, right next to the coffee shops – and they were staffed by very good technical people. You could go there and learn everything about computers, about how to use them, how to navigate the Internet.

And, of course, the Internet itself was everywhere. As a result of clever public policies, it was also cheap or entirely free. They also had independent Internet providers like XS4ALL that did have this progressive view of communication. There was very little of this kind in Italy – at least in a world beyond the hackerlabs that were still perceived as “underground” culture.

When I got to Amsterdam in 1998, there was a lot of buzz about the pioneering project of the Digital City; this was an important first effort to have a public debate about how to provide new rights and services for citizens in the digital city. These debates would eventually continue in the new institutions like Waag Society, which was something of a European public equivalent to the private and commercial world of the MIT Media Lab, but also to mailing lists like Nettime.

You joined Indymedia early on. What drew you in and what were your initial impressions?

In 1999, during the anti-WTO protests in Seattle, the technical community started to create platforms and tools through which the social movements could express themselves and create their own media. Later, this would be called “citizen journalism” and “user-generated content” – and it would be almost completely subsumed into the rhetoric of Web 2.0.

From the outset, Indymedia was a platform with editorial teams distributed all around the world, each rooted in a different nation state and each enjoying organisational autonomy. They shared some principles and rules, including how to moderate content posted by users that was misogynist, racist, or discriminatory. It started with text. But soon one could also publish photos, videos, and audio. Imagine what it meant to publish and distribute video on an open platform in 1999!

So Indymedia became like a meta platform that connected all the independent media and all the independent journalists and content production communities together with the technical community, the hackers, and the social movements. This went really big in the early 2000s, because then, after Seattle, the anti-globalisation protests started to spread all around the world.

There is an interesting and mostly unknown connection that ties Indymedia to Web 2.0…

Yes! Many of the members of Indymedia’s technical community who created the open publishing protocol were from North America. They went back to the US and ended up working for companies that would eventually become big Web 2.0 names. The story of the podcasting firm Odeo – where Twitter was born – is emblematic, for, along with Jack Dorsey, it also employed Blaine Cook and Evan Henshaw-Plath, who not only participated in Indymedia but were also key to the development and further adaptation of TXTMob, a text messaging tool used at many protests, which would go on to inspire Twitter.

Soon, by 2005 or so, some of us realised that in order to produce high-quality content, communities like Indymedia needed much stronger, sustainable structures. Indymedia, for example, was completely volunteer-based; the majority of people didn’t have a salary. Donations, while helpful, didn’t allow us to develop bigger plans. We started to debate these issues internally.

There were, predictably, opposing camps on these issues. Some linked it to the question of financing public goods, understanding that the pioneering infrastructures that powered Indymedia could also be funded by governments as innovation for the public interest; this whole debate could be connected to the broader struggle of access to information and digital rights. Others obviously opposed any state involvement, and later turned to the private sector for funding.

Who were these opponents?

For the most part, the North Americans preferred the private solutions, and would then move to Silicon Valley and work with Web 2.0 firms, thinking that, via social media, they were democratising the ability of people and communities to have a say and publish their own content – outside of most regulations that existed at the time. Some of them were libertarian; some were anarchists. They never looked at the state and the public sector as a solution, viewing it, mostly, as a censor or an enemy to fight.

I think they found more freedom, more resources, and definitely more money in the venture capital scene in Silicon Valley. So they founded all these companies, which, over time, became very different from what they were at the outset; the founders left after a few years, and the companies were then acquired by bigger players.

So, you were clearly in the other camp – the one that did see governments as potential enabling actors rather than enemies to fight. Interestingly, it’s by studying the Brazilian experience that you also found some interesting policy suggestions for Europe. Could you explain how exactly that encounter with Brazil happened?

I always thought that we needed to influence public policies and create pioneering policy instruments that could support emerging digital infrastructures, as well as make the tools of content production cheaper and more accessible. Brazil provided an inspiration for this, since Lula’s ascent to power was welcome news to many of us in Europe. I went to one of the first World Social Forums in Porto Alegre; I do remember seeing many intellectuals there who were influential for the movement, such as Noam Chomsky, but also Richard Stallman, Larry Lessig, and Vandana Shiva.

The themes of free access to knowledge and connectivity, and the reform of the global intellectual property regime, were finally given the attention they deserved. There were also a lot of discussions about things like participatory democracy and open budgeting – and that did seem very exciting to us in Europe.

So, under Lula, Brazil was a big progressive player – also in all sorts of international and global debates and institutions. Around those years, the World Summit on Information Society was getting into gear and I remember attending its first big gathering in Geneva in 2003. I was there as part of the We Seize! activist campaign (“We Seize” is a play on WSIS) to demand more accountability, transparency, and democracy during the Summit negotiations. Two years later, there was a follow-up summit in Tunis and it was extremely important for my subsequent work, both in Brazil and in Europe.

How so?

I was in Tunis as an independent journalist covering the summit (I was freelancing for the Italian newspaper Liberazione). One of the most exciting people present there was the Brazilian minister of culture, Gilberto Gil. Gil, of course, is known as one of the country’s most famous musicians, very much involved with the Tropicália movement, a collaborator of Caetano Veloso and much else. Gil, who served as minister of culture for five years, did take some rather radical steps to promote the agendas of free knowledge and free software.

First of all, he – like the rest of the Lula administration – was not afraid to pursue an adversarial policy towards big technology companies, and insisted on reshaping government policy, including on procurement and around free and open source software. Free-software advocates and prominent professors like Sérgio Amadeu were now working for the government.

Second – and this was very important to me – Gil launched a program called Pontos de Cultura, which was a government effort to promote bottom-up access to and production of information and knowledge, including from the favelas and parts of Brazil that were previously excluded from the sphere of culture. This did have some similarities to what we were doing with hackerlabs back in Europe but, in Brazil, this was not limited to hackers and activists.

Rather, this was part of a big government push, with Gil and the ministry driving much of this effort to democratise access to culture produced by means of digital technologies. There were also parallel efforts to boost digital inclusion and digital literacy and those were all quite radical and ambitious programs, including large public investment in digital infrastructures and connectivity for open public broadband, and digitalisation of public administration services using free and open source software to promote technological autonomy aligned with their national development goals.

As I understood what was happening in Brazil, I immediately started thinking of how to apply these experiences in Europe. At first, the focus was regional; by then, I was working in the administration of the Lazio region and was instrumental in establishing a collaboration program between Lazio and Brazil within the European digital society framework, so there was some effort to translate the Pontos de Cultura experience locally in Italy. I wrote a paper analysing that effort.

How did these efforts go beyond Italy?

This has to do with the other important person I met in Tunis – an Italian official at the European Commission called Francesco Nachira (he would retire from the commission in the early 2010s). In 2002, Francesco published an important paper on “digital business ecosystems,” which made the case that Europe, instead of building giant national or pan-European champions, should rather be investing in creating knowledge and technological infrastructures that could allow smaller companies not only to build cutting-edge products but also to build on each other’s discoveries and innovations.

Nachira would build a whole research program around this insight, culminating in an important 2007 book. By then, he had this very appealing theoretical but also practical vision based on a biological metaphor of the ecosystem, which saw business and technological spheres – what today we might call “infrastructures” – interact in such a way as to produce mutually beneficial results. So the idea was that by promoting free software and open source technologies but also all sorts of open knowledge infrastructures – we would later talk about them as the “semantic web” – Europe could still preserve its technological and industrial autonomy even without having big technology giants.

Perhaps, it would have even been a better path, as, with smaller players, one could achieve better and more democratic governance as well as have an economy that was not just about competition and profit but also collaboration and cooperation. There was also plenty of research that showed that SMEs were not always acting to maximise profit and pursued other goals. Nachira believed in cooperatives and smaller enterprises; one could say that there was much appreciation for decentralising economic activity, following the Italian tradition of territorial industrial districts, with learning economies and their specialisation tailored to local needs.

It’s hard to believe this today but many of Nachira’s papers and presentations from that period – published under the auspices of the European Commission – cited Project Cybersyn in Allende’s Chile as inspiration and as something that, by drawing on advanced but decentralised ICTs, should be recreated in Europe. (Project Cybersyn was somewhat better known in Italy due to the appearance of an important 1980 book about it.)

There was a lot of very interesting theoretical research that informed Nachira’s approach: he drew on general systems theory, cybernetics, constructivism, and various theories of cognition to make his points, citing Stafford Beer, Fernando Flores and Terry Winograd, Varela and Maturana, Bateson and Piaget. He explicitly rejected the game theoretical approaches that were popular at the time (ironically, it’s precisely game theory that is so much in fashion with the Web3 crowd these days).

How did Brazil fit into all of this?

Nachira knew of what Gilberto Gil was doing in Brazil and he wanted to meet him and see if some kind of bridge could be built between Europe and Brazil. Pontos de Cultura seemed exactly the kind of government-driven effort to build a digital ecosystem of free knowledge and technology that could then have major societal and economic effects… a sort of public policy for digital sovereignty, we would say today.

It fit very well with Nachira’s program, and we wanted to test whether something like Pontos de Cultura could be launched in Europe as part of the Digital Ecosystem agenda. I soon started working closely with Nachira, becoming one of his liaisons to European regions and to Latin America. I then also did a brief stint as an expert at the European Commission – working on smart cities, ecosystems, and living labs while pursuing my PhD at Imperial College London.

Nachira’s efforts, while initially lauded by the Commission, soon ran into some opposition and he took early retirement (also due to health reasons). The spirit of those programs, however, lived on, and soon – in 2012, I think – the European Commission launched CAPS (short for Collective Awareness Platforms for Sustainability and Social Innovation). By that time, I was finishing my doctoral studies at Imperial College London and was also collaborating with the UK’s innovation agency, Nesta; its incoming director, Geoff Mulgan, was very enthusiastic about having me apply for CAPS funding on Nesta’s behalf. This is how my first big European project, D-CENT, was born.

Just one more point on Nachira here: around 2014, after he retired, we did go to Ecuador, working with people like Andres Arauz there, also in hopes of getting Latin America to pursue the kinds of policies that we found so difficult to implement in Europe. There was quite a positive reception to what we had to say but Latin America, too, soon entered a crisis, so many of these ideas remain to be realised.

Could you explain the logic behind CAPS and also how did D-CENT, your first European project, fit into it?

The idea behind CAPS was to promote all kinds of new and emerging technologies that were empowering people and communities – e.g. decentralised protocols, distributed networks, technologies for democratic participation. At that time, the European Commission was funding technology for the sake of technology. Fabrizio Sestini, the intellectual architect of CAPS, understood that the social purpose had to come first. And he wanted to create a program driven by what communities could do with technologies to serve their needs, with a very strong environmental focus as well (once again, we are back to the kind of impulse that drove our work in the hackerlabs back in the late 1990s and later in the Pontos de Cultura model).

So I decided to apply for CAPS, with a project focused on reducing the dependence of social movements, NGOs, municipalities and most other political institutions on the platforms of Web 2.0. By that time, it was impossible for any of these entities to engage in activism or enable deliberation among their members outside of the platforms such as Facebook or YouTube. We basically had no autonomous platform where this could be done, so everyone took to these big platforms. So, D-CENT, in a way, was conceived as a way to address this problem.

Was there some other bigger philosophy underneath?

On a very general level, what we tried to do with D-CENT was to develop platforms for large-scale democratic participation, with a focus on democratising the party form – actually to strengthen the party form – and on making sure that democracy could be fitted to the 21st century, giving more power to citizens without, however, believing that parties as such were over. We wanted political parties to find new ways to interact with their members, while enabling social movements and organisations to effectively use digital technologies to empower collective action.

What came out of it, as we already said, is a set of large experiments that resulted in a democracy platform that is now used to deliberate on a pan-European scale instead of, say, a private alternative to Facebook. Our platform was funded by European taxpayers and later picked up by the city of Barcelona and many other cities, and we are proud of it. It’s open source, it’s privacy-preserving, and it’s accountable. We can’t manipulate the data, we can’t sell the data, we can’t manipulate the opinion of the citizens. It shows that technology built in a bottom-up way, by communities of citizens, can become a pan-European platform. Such small regional projects can scale and be robust. This is yet another proof that Nachira’s original vision of the digital ecosystem was correct, and that you can develop technologies that are democratically owned and controlled.

Like all European projects, D-CENT was a joint effort of many different organisations, right?

We had some interesting partners from the very start. Harry Halpin was already working with Nachira, who, by then, had discovered his academic work on distributed cognition. Harry was working with Tim Berners-Lee at the time, on standardisation. There were other people – e.g. Blaine Cook (of Odeo-Twitter fame), who was working on decentralised identity management, as well as Evan Henshaw-Plath (also of that Odeo-Twitter contingent), who, by then, was the CEO of a lean UX startup.

So while there was a very strong technical component to our consortium, we also knew that what we were trying to solve was not just a technical problem. Thus, we wanted to have real movements and institutions as part of the project. We had the Pirate Party in Iceland, as they became an important force in the country and conducted the first experiment with direct democracy at scale; the mayor of Reykjavik at the time, Jón Gnarr, was very much involved with what we were doing. The same thing happened in Finland, where we worked with Forum Virium, the ICT organisation of the city of Helsinki; our point person there would later become the technology officer of the city of Helsinki.

At the time, all the talk was about Indignados in Spain. I knew some people around Manuel Castells’ research group at the UOC Internet Interdisciplinary Institute (IN3) in Barcelona, especially Javier Toret Medina, who was also active in Indymedia in the 2000s. They were doing some cutting-edge work on theorising these new tech-enabled social movements. There was some shared reluctance to accept the standard US narrative that Facebook and Twitter were the tools that revolutionaries used and that we should just accept that, following the Arab Spring, all the mobilisation would happen there.

So, with our backgrounds in Indymedia and other social movements, we wanted to reclaim this space. We wanted to give these social movements a stable, reliable, scalable digital infrastructure that would also protect their privacy. But we also wanted to understand what they need in the political process and empower them with the means to debate, organise, make decisions, and eventually govern. The “governing” part proved very important later on, as many of us participating in the project – including me – were called to take on various government positions, mostly in cities.

We also had another partner in Spain, Medialab-Prado, which at the time was building a tool that was starting to be used by Podemos, and ended up as the main participatory platform of the city of Madrid. It would eventually form the backbone of Decidim, the tool for deliberation and governance that would eventually be used not only in cities like Barcelona and Helsinki, but also at the pan-European level, endorsed by the European Commission, the Parliament, and the council of the Conference on the Future of Europe.

So D-CENT had this important dimension of creating an alternative platform for mobilisation, deliberation, and governance. But there was also an important dimension looking at social digital currencies. How did that come about?

This part – on economic empowerment, money, complementary currencies, and the potential of crypto-currencies – was inspired and influenced by Denis “Jaromil” Roio, whom I got to know in the hacker scene in Italy in the late 1990s, when we were both very young. Jaromil came from an interesting background: one part of him was rooted in the world of arts and culture, the other in the world of free software. He was also active in Amsterdam, working at Montevideo, an important cultural institution in the city. He was busy developing dyne:bolic and other creative projects based on Debian and Ubuntu. Inevitably, he became an early expert on Bitcoin, so we watched that space very closely from the very beginning. (Note from Evgeny: I remember running into Max Keiser, one of the loudest Bitcoin maximalists today, at a D-CENT event in London in December 2013!)

Ok, so Jaromil was following this early on. But what was the rationale of even looking at these currencies and digital money?

On the one hand, D-CENT wanted to promote decentralisation in the sphere of political participation, i.e. we wanted citizens to be consulted when it came to decision-making – much like they were consulted when it came to participatory budgeting in Porto Alegre. On the other hand, we wanted to solve the question of the concentration of economic power and somehow link it to Nachira’s ideas about digital ecosystems. So we wanted to promote economic empowerment and decentralisation as well.

Ok, so imagine if you do a big collective participatory budgeting project. But how do you fund this? And how could you fund many of these initiatives, given that this was also the time of austerity and the Greek experience was very much part of our everyday conversations? So we started looking at various complementary currency movements – the Brixton Pound in the UK, the WIR bank in Switzerland, and many other examples in Europe and elsewhere. We soon started working with Bernard Lietaer, who knew this space very well and had a lot of interesting theories about the ecology of money. He was the one to author the pioneering D-CENT report on social digital currencies in 2015.

So, already in 2013, we started asking what was going to happen to all these alternative and complementary currency movements given the arrival of Bitcoin and other crypto-currencies. Marco Sachy, who was involved with the project, was doing some interesting work on Freecoin at the time (it eventually became his dissertation). So we came up with the Freecoin Toolchain and ran some basic pilots in various partner countries.

Let’s talk briefly about the sequel to D-CENT, which was called Decode.

There were several big assumptions driving it. First of all, it became clear that the city had become the site of democratic experimentalism. In 2015, as progressive mayors got elected in many European cities, we saw this new wave of interest in “democratic municipalism.” Suddenly, people wanted to contest this notion of a heavily financialised, for-profit city – including, when it came to technologies, the image of the very slick “smart” city, where everything was offered “as a service” by some technology vendor.

So I wrote a short paper on how to move from the smart city to the democratic city and I actually got a call from Ada Colau inviting me to implement that vision in Barcelona, as its chief technology officer. So Decode, in a way, was an effort to think through what would be involved in that transition, with citizens enjoying more rights and control over digital public infrastructures, sensors, data, and so on.
We clearly also wanted to show that there were other paradigms to think about data; for us, it was obvious that it was something collective and public and that citizens should be given an option to decide what to do with that data – including, of course, giving them the ability to share that data with the public sector so that more and better public services could be built with it. We thus started talking of “data altruism” and “data sovereignty,” which are now central concepts in the new Data Governance Act of the European Commission.

So it was important to show that there were many options for what one could do with one’s data: how much of it to share, how much of it to keep private, and on what terms. We did run some interesting pilots, but even I have to acknowledge that Apple may have got ahead of us when they pushed the update that now requires users to explicitly decide whether they want to be tracked and how. We thought of something like this but, of course, in the context of how citizen data, once shared, could help enhance value elsewhere in the public administration.

So for us, the task was not figuring out how to give people an option to store their data in a safe of some kind, so that they could later monetise it – the underlying motivation of many in the self-sovereign identity space today – but how to create public services and infrastructures that would actually make data-sharing lead to more value being produced in the public sector as the data itself gets socialised: we called it a new social pact on data. It is a very hard task, as it requires transforming the public sector as such, not just building the right standards and protocols for regulating who has access to data and how.

What was the broader political message you were intending to send with Decode? Did you succeed?

Decode posed the question of the infrastructural power of Big Tech, of their dominance of the whole stack. So we were trying to make a case for industrial policy; that we Europeans needed to take back control over the critical digital infrastructures and technologies that underpin basic services and institutions of the 21st century, from healthcare, to education, to transportation, to even logistics and delivery. But, most importantly, we wanted to problematise the layer of data, because we understood the many fundamental connections between data, money, reputation, and identity.

We focused on the data layer because that seemed like an obvious way to attack the dominant business models of surveillance capitalism. We didn’t do it because we thought that controlling the data layer would exhaust the question of the infrastructural power of Big Tech.

There were, of course, failures. While we managed to scale projects like Decidim, we didn’t manage to achieve a common pan-European initiative on technological sovereignty, linking political, economic, and geopolitical dimensions in a coherent way. There’s still no coherent vision of a digital industrial policy that could liberate even half of the stack that Europe needs, not to mention its entirety. In our defence, we also had very little money; 5 million Euro – Decode’s budget, spread across many partners in the project – is not much, given the ambitions.

We were also too early in many respects. What we were theorising with Freecoin and alternative digital social currencies arrived a few years too early; today it’s completely inside the crypto bubble, minus the political and social concerns that we had. As a result, the elaborate institutional framework that we crafted is missing from most of these crypto projects.

So if we look at the big picture and study the choices that Europe faces right now, what are they?

Well, there’s always the option of leveraging industrial policy and investing in European technological sovereignty to rebuild the stack – much like the Chinese have done, but in line with our democratic principles and values. We would need new companies to do that because the old incumbents, including the telcos, are probably not up to the task. How do we create the companies we need? Perhaps, the venture capital market, which in Europe does attract a lot of state money, could be used for that.

This also explains why I found myself leading Italy’s Innovation Fund, a venture capital fund backed by Italy’s national promotional institution, Cassa Depositi e Prestiti (CDP), the equivalent of France’s Banque publique d'investissement (BPI) and Germany’s KfW. How do we ensure that VC investment in Europe doesn’t follow the American path and that instead of focusing on management fees and whatnot, we focus on creating value for the public while achieving Europe’s much-needed technological sovereignty? And how do we do it in such a way that European start-ups are not then bought up by Facebook or by Saudi Arabia's sovereign wealth fund?
How could we deliver on the promises of decentralisation, decarbonisation, the Solidarity Economy, the new welfare state through the companies that will be created and funded? Likewise, when we think about enabling political participation – to help people fight climate change or even solve their own local problems – how do we do it in a way that goes beyond the logic of the market and doesn’t require turning everyone into an economic agent responding to financial incentives of some kind? How do we do that without financialising politics?

This sounds like you are not a big fan of hijacking financialisation to serve social and political needs, as many proponents of crypto and Web3 have been arguing…

I’m not. What I find so suspicious about DAOs and tokenisation and Web3 is the idea that they want to tie every institution to the logic of the stock exchange: if things work well, the value goes up – and this creates some kind of a disciplining mechanism. Do we really want to “optimise” our healthcare or education this way? Even when it comes to companies, we still have public companies that have a public mission, and even if they have been privatised this doesn’t somehow eliminate that mission.

Tokenisation, for me, is the latest manifestation of what we could call the super-financialisation of everything, enabled by the digitisation of physical processes and objects. Now one can attach IP rights to everything; make smart contracts out of everything; enable transactions in everything. We fought that logic early on, with Decode, when people started making arguments about data being an asset class, something that accrues to individuals, to be bought and sold. We always argued that one could also have a much more social and public take on data, and specify collective access and ownership rights; data doesn’t have to be treated as something proprietary, but as something that can create public value and redistribute wealth and rewards.

Can blockchains and crypto be of some help here? Maybe, but then one would need to change the entire technological system. One would need to say that, instead of using blockchains to create smart contracts that enforce property rights, we want blockchains that enforce the “right to informational self-determination” or “the right to knowledge” – or even the right to inspect algorithms in order to assess their impact, which is very relevant today when it comes to collective bargaining and platform workers’ rights in the gig economy. This would require transforming quite a lot of jurisprudence, rethinking our notion of the public good, and then also somehow fitting it onto the blockchain.

I am thinking of debates about digital IDs in the Global South – e.g. India – where biometric technologies are seen both as mechanisms of control but also as ways of claiming welfare benefits that many citizens couldn’t claim before. So the effects are quite ambiguous and depend on the politics of the situation.

This is exactly the problem. There is a big fight for the democratisation of the state, for a new notion of the democratic public. We either win or lose that fight. That has been my frequent response to advocates of things like “platform cooperativism.” Ultimately, these cooperatives cannot and do not live in a void. You do need the state to enact new regulatory frameworks, new ownership patterns and structures inside an economy. The state is the only tool we have – the one institution that can regulate and create laws to prevent big companies from entrenching their dominant positions and abusing their market power. So, in the end, if you want to democratise the economy, you will need the state. We need to reclaim that power, not hide away from that responsibility by invoking the power of crypto or the market or financialisation.

As someone who has worked in this space over twenty years, what do you think of the vision for emancipation that is now on offer by those advocating Web3?

Well, first of all, I don’t see how Web3 – focused as it is on the creator economy and tokenisation – would allow us to deal with questions regarding infrastructural power and the industrial policy of the future… things like broadband, 5G, data centres, cloud computing, AI, quantum computing, microchips, the next generation of batteries. It’s not just the advertising business models of Web 2.0 that should concern us. What does Web3 offer us here? Not much. The Web3 discourse accepts today’s status quo as a fact and moves on to discuss all these other aspects.

Most of the stuff about DeFi seems to me just a temporary phenomenon – the result of central banks’ inaction and delay in grasping the threats that come from leaving this industry unregulated. In this sense, China seems to be seeing through all the Web3 rhetoric and asking the right strategic questions, both in terms of controlling the whole stack, from batteries to AI, to establishing control over the FinTech sector in a way that would reduce risks to the country’s overall financial system. Europe, of course, doesn’t operate in the same political climate, so acting so resolutely about Web3 might not be an option (also for geopolitical reasons). It’s hard to imagine Chinese policymakers spending any time discussing Dogecoin.

What is one thing that the current advocates of decentralisation – at least as this idea surfaces in the crypto/Web3 space – get wrong, in your view?

What I learnt from Francesco Nachira and his interest in constructivism and theories of language and cognition is that decentralisation can never be just about decentralising infrastructures. One always needs to have a requisite strategy for decentralising institutions as well.

So that’s why we always aimed at the decentralisation of economic and political power as the necessary condition of possibility that was needed to deliver on the true emancipatory potential of decentralising digital infrastructures. When I look at the promises made by the proponents of DAOs and NFTs, they seem to believe that technology itself would somehow do the job: once we code a DAO correctly, it will ensure a new institutional form and that form would have revolutionary effects, etc. This seems to me short-sighted and also very inward-looking.

It’s not, of course, only about decentralising power. It’s also about creating new institutions to keep old power – which, by now, has taken on new forms – in check. Where are these new institutions when it comes to crypto and Web3? Everyone seems to believe that big tech platforms and Wall Street and Hollywood will just stand idle as they are being disrupted by “crypto.” Does this really sound plausible to anyone?

I’m reminded of the experiences we had in Italy in the early 2000s, when some of us – including many people in the hackerlabs – joined forces with the so-called chainworkers – those working for the likes of WalMart – who enjoyed few of the traditional workers’ rights. We did insist on giving them basic income, on building new kinds of trade unions to defend them, and so much more – but these calls were met with silence and often direct hostility from the traditional unions, who just couldn’t see that exploitation and precarity were taking on new forms, previously unknown to them. But imagine how much time we would have gained if these institutions had, indeed, been formed at the time. It would have taken the struggle against Uber and Deliveroo in a completely different direction, at least in Europe…

I have one last question. For you, it’s been an interesting professional trajectory to jump from Indymedia – the world’s first online independent media website – to being on the governing board of RAI, one of Europe’s most respected public broadcasting networks. What kind of transformations can we expect from your presence there?

In a sense, I think about my mission there through the prism of my earlier experiences with Pontos de Cultura and digital ecosystems, and more generally in the context of the mission to reclaim Europe’s digital sovereignty. How can we retain and reform public media so that they provide high-quality content, inform the general public, and serve a public mission? Non-commercial broadcasting networks do run the risk of decline and disappearance because of the challenges they face on many fronts, from advertising to content production. Now they suddenly have to compete with Facebook and Google and YouTube and Netflix and Amazon…
My own view on this is that public media might actually need to learn a thing or two from Indymedia. We do need to think about using public media to aggregate and curate high-quality content; they would need to leverage open APIs and find ways to attract content produced by all sorts of players currently outside their ecosystem, be it newsletters or festivals or newspapers, podcasts or some completely new format. Public broadcast media can bring massive visibility to such sources, and they can also contextualise them much better, given the huge archives that they hold. Perhaps we’ll see that there’s a way to do curation and recommendation wisely – using data and ethical algorithms in the public interest. This is where public media may actually do a much better, more responsible job than whatever one might expect from Netflix’s algorithms…

But do we even need any of this given that Web3 and crypto would soon usher in the “creator economy” of independent content producers and curators?

I am convinced that we do. Curating and contextualising things well is an expensive and ambitious proposition; that’s why no innovation in the crypto space is likely to replace libraries or museums – not even NFTs. Culture and media, done well, are expensive undertakings, and if we expect them to keep playing an important role in our democratic system, we have to be prepared to pay for them as public goods – and not as something funded solely through advertising or data collection or whatever else the latest model might be.
