HDI funding call on Surveillance and Resistance

My new theme within the HDI NetworkPlus has an active funding call, closing on October 1. You can find out more about it, and get an application form for project funding, here.

How do we design for resistance against data surveillance? Human Data Interaction offers a conceptual framework for system design that goes beyond notions of data/algorithmic transparency, focusing on helping people understand what is happening to data about them (legibility), change relevant systems to be in better accord with their wishes (agency), and work with the people using the data so as to improve that processing (negotiability). Work so far in HDI generally assumes that at least one of these three features can be implemented effectively, to meet people’s needs and desires.

However, what happens when one has some understanding of what is happening with one’s data, but cannot change the system or work positively with the people processing it? Then the collection of one’s personal data becomes something more akin to surveillance — in ways that are often driven by contemporary business models, as per the widely discussed issue of ‘surveillance capitalism’. This invites the question at the core of this theme: what should we do when legibility, agency and negotiability are not enough? It seems that we could then support systems and processes that allow people to resist or subvert such abusive, illegal or otherwise undesired surveillance — in ways that are not themselves abusive or illegal. In this way, we can build on the existing three tenets of HDI, and make resistance a fourth part of our conceptual framework.

In this call, we are looking for proposals that directly address this question in a practical and demonstrable way, by developing technical solutions, provocations or experimental explorations. This can include (but is not limited to) software development, prototype evaluation, interface design, and arts-based responses.

The funding: We hope to fund around 5 projects (1 x £50k, 1 x £10k and 3 x £2.5k), though this will be determined by the quality of the proposals. All projects must end by September 2021.

On Vardi’s ‘To Serve Humanity’ CACM article

Although Vardi has done many fine things, I don’t think this is among his best. There are two main problems in the article (and the Vienna Manifesto): one stemming from a type error, and one from an overly broad abstraction.

The first is the treatment of technology as separate from its design/use by humans. It’s not really viable, I suggest, to say that “technology sometimes seems to ‘eat’ humanity”. People use technology to ‘eat’ others, as part of politics, business, and so forth. 

I (of course) agree with his point that (most of) our discipline has not accepted the ‘dual nature of technology’… although there are corners of the discipline for which the study and avoidance of negative aspects of computational technology are the subject of everyday work. Why isn’t that work alluded to, or referenced?

The second is the overly broad treatment of ‘people’ and ‘humanity’. Vardi treats these terms as if they apply to a homogeneous group of individuals, and ignores something that stems from the previous point: for some people, using technology against other people *is* what they do as humans.

Obviously, but sadly, dominating and exploiting other people has been part of humanity ever since humanity began. It seems pointless to say that we should put humanity at the centre of our work if we are to improve the situation… when part of humanity is the problem. There are already people fighting against such exploitation… but I don’t see much in this article about what they have done or are doing…

Also, yes, there is a significant part of humanity within computer science who cause similar ill effects unknowingly or naively. I wonder if this is the part that Vardi really wants to influence… or has a chance in hell of influencing…

So… it seems naive (to me) of Vardi to write “Yet the participants [of the Vienna workshop] were convinced it is possible to influence the future of science and technology and, in consequence, society.” Well, yes, sure. Why did he think otherwise? The future of science and technology is continually being influenced by changing societal norms, educational practices, legal regulation, and so on. It seems quite bizarre to say (as the Vienna Manifesto puts it) “We encourage our academic communities, as well as industrial leaders, politicians, policy makers, and professional societies all around the globe, to actively participate in policy formation” when there are already academics, industrialists and so forth engaged in lobbying and other forms of policy formation. What matters is who among these people has the political will and financial clout to shape policy in a way that advances the values he wants to advance, and (more importantly?) to overcome those with the political will and financial clout to keep things as they are… or make them worse.

I do agree with the motivating sentiment of what Vardi and the manifesto say here. I just wish it were better expressed, and dealt more realistically with the momentum and power of the processes already known to be active in stopping what Vardi wants to happen. Some of the manifesto points are great, but some seem woefully simplistic: missing the lessons of our most recent major technologically-driven problems, or excessively bound up in Western-centric politics, or not really handling the complexity of fundamental issues (e.g. ‘fairness’).

I am therefore sad to say that I think that this manifesto, like a lot of other similar moral and ethical frameworks created in and around computer science, will have zero or minimal effect. I’d be very, very happy to be proved wrong, though. 

HDI funding call on ‘beyond smart cities’ is out

A few weeks ago, the Human Data Interaction NetworkPlus held a workshop on ‘beyond smart cities’ in Edinburgh. Based on that, we’re now putting out a call for projects that we can fund (up to £50K) within this theme. August 2nd is the deadline, and the web page for it is here.

As the call says, we particularly welcome proposals that address topics such as (but not limited to) the following, from the perspective of HDI:

  • Redesigning councils around data tech: Too often, procurement processes are the bane of smart-city work. How can cities work in better ways? How can a city deflect the commercial push towards large-scale, ‘canned’ generic products, rather than systems designed with and for it?
  • Individual versus the Collective: How to deal with the way that, in smart city systems, the benefits to one person may mean costs or losses to others? Similarly, benefits to one city area may negatively affect other areas, or affect rural regions.  How to design smart city systems in ways that take account of such inequalities and interdependencies?
  • The encouragement or imposition of behaviours: Many smart city designs imply or demand behavioural changes among citizens, but who defines these, and how? How to handle the surveillance and governance issues stemming from this ‘push’ by cities upon citizens?
  • How to trust data and services? Several of our workshop participants discussed variants of a ‘citizen science’ approach to trust, in which processes of data collection, measurement and evaluation are in the hands of citizens, so that they can act in a bottom-up way to feed into the processes of urban change. How can such citizen-led approaches create useful evidence for decision-making?

Fingers crossed we’ll get some good proposals!

‘Interpreting Computational Models of Interactive Software Usage’ accepted to CHI workshop on Computational Modeling in HCI

Good news from Oana Andrei, lead author on this workshop paper based on our Populations project from way back. The paper, outlining some of the formal methods developed for analysis of app usage logs, will be presented at the workshop at ACM CHI in Glasgow in a few months. Here’s the abstract:

Evaluation of how users actually interact with interactive software is challenging, because users’ behaviours can be highly heterogeneous and even unexpected. Probabilistic computational models inferred from low-level logged events offer a higher-level representation from which we can gain insight. Automatic inference of such models is key when dealing with large sets of log data; however, interpreting these models requires significant human effort. We propose new temporal analytics to model and analyse logged interactions, based on learning admixture Markov models and interpreting them using probabilistic temporal logic properties and model checking. Our purpose is to discover, interpret, and communicate meaningful patterns of usage in the context of redesign. We illustrate by application to logged data from a deployed personal productivity iOS application.
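To make the starting point of this pipeline concrete, here is a minimal, hypothetical sketch of inferring a plain first-order Markov transition matrix from logged event sequences. It is illustrative only (the paper itself learns richer admixture models), and all names and the example log are invented:

```python
from collections import defaultdict

def estimate_transition_matrix(sessions):
    """Estimate a first-order Markov transition matrix from logged
    event sequences (one list of event names per session), by
    counting adjacent event pairs and normalising per source event."""
    counts = defaultdict(lambda: defaultdict(int))
    for session in sessions:
        for src, dst in zip(session, session[1:]):
            counts[src][dst] += 1
    matrix = {}
    for src, dsts in counts.items():
        total = sum(dsts.values())
        matrix[src] = {dst: n / total for dst, n in dsts.items()}
    return matrix

# Invented example: three short app-usage sessions.
logs = [
    ["open", "search", "view", "close"],
    ["open", "view", "close"],
    ["open", "search", "close"],
]
P = estimate_transition_matrix(logs)
# e.g. P["open"]["search"] == 2/3 and P["view"]["close"] == 1.0
```

A model like `P` is then what higher-level analyses (such as checking temporal logic properties over the chain) would operate on.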

I’ll add the PDF to Publications soon…

Workers by Self-Design: Digital Literacies and Women’s Changing Roles in Unstable Environments — first workshop and public event

This GCRF-funded network will soon have its first workshop, in Glasgow, which includes a public event.

As the public event’s EventBrite invitation page says, “The first public event of the network is scheduled for March 12th, 2019 where we invite participants to attend a series of short presentations on the topic, as well as to share their knowledge, experiences and challenges. If you are interested in women’s roles and experiences with non-traditional work pathways, digital literacies and innovative entrepreneurial solutions, join us!”

Workers by Self-Design: Digital Literacies and Women’s Changing Roles in Unstable Environments

In a welcome turn of events, some additional Global Challenges Research Fund budget was made available… and this allowed the above-named project proposal to be funded. It’s a £29K network-style grant:

The proposed meetings aim to strengthen and develop new partnerships among academic and non-academic partners, who will work to understand, explore and create an impact-oriented research agenda on women’s engagement with digital literacies and their changing roles as they transition into the workplace in unstable environments in the UK, the Philippines and Iran.

The leader is Lavinia Hirsu, in the School of Education at U. Glasgow, along with Dr. Katarzyna Borkowska, in the School of Interdisciplinary Studies. Academic partners are Dr. Zenaida Reyes, Professor and Director of Linkages and International Office, Philippine Normal University, and Lamiah Hashemi, Senior Administrative Officer, University of Kurdistan Technology Incubator, Iran.

I will try to help with tech-centred issues, as I can. I rather hope that there might be a way to connect it to the issues and people in the HDI network, which has a theme on skills and education…


Associate editor for ACM Transactions on Computer-Human Interaction

I’m very glad to now be an associate editor for the ACM Transactions on Computer-Human Interaction. I was barely in the door, and I already have a paper to organise reviewers for… and, yes, I’m struggling already to get people signed up by the expected date. Conforming to an old norm?

Improving the t-SNE data visualisation algorithm via stochastic sampling

Another project started very recently, with the above title. It’s great to be working with Alistair Morrison again, as we look at ways to apply the ideas we developed (many years ago!) for fast spring models (a.k.a. force-directed placement) to the t-SNE algorithm. Somewhat bizarrely, this is funded by the US Navy. We are grateful, but also somewhat surprised.
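As a rough illustration of the kind of trick involved (this is my speculative sketch, not the project's actual algorithm), the fast spring-model idea replaces an all-pairs repulsion pass with a pass over a small random sample of points, cutting per-iteration cost from quadratic towards linear:

```python
import random

def repulsion_step(positions, sample_size=10, step=0.01):
    """One iteration of stochastically sampled repulsion in 2D:
    each point is pushed away from a random sample of others rather
    than from all n-1 points, reducing per-iteration cost from
    O(n^2) towards O(n * sample_size). Hypothetical sketch only."""
    n = len(positions)
    updated = []
    for i, (xi, yi) in enumerate(positions):
        fx = fy = 0.0
        for j in random.sample(range(n), min(sample_size, n)):
            if j == i:
                continue
            dx = xi - positions[j][0]
            dy = yi - positions[j][1]
            d2 = dx * dx + dy * dy + 1e-9  # avoid division by zero
            fx += dx / d2  # inverse-distance repulsive force
            fy += dy / d2
        updated.append((xi + step * fx, yi + step * fy))
    return updated

# Invented example: one sampled repulsion pass over 100 random points.
pts = [(random.random(), random.random()) for _ in range(100)]
pts = repulsion_step(pts, sample_size=5)
```

The same sampling idea could plausibly stand in for the exact pairwise repulsive-gradient computation in t-SNE, which is where the O(n²) cost lives.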

Ethical Design of Apps for Assessing Mental Health

This is an EPSRC Impact Acceleration Award project getting off the ground now, on phone apps that collect personal data that can be used to predict social anxiety.

I am enjoying working with Angus Ferguson and Marek Bell on this short (4-month) project. It is based on an undergraduate project by Dimitris Eleftheriou, which basically reproduced an experiment by Boukhcheba et al. reported at the 2017 Ubicomp workshop on mental health. Dimitris’ work lets the app operate in a way that keeps most or all of the personal data (collected via Denzil Ferreira’s AWARE framework) on the phone; we will extend it by adding and evaluating features that give participants ongoing feedback on, and control over, what happens to the data sent from their phone to us.
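As a toy illustration of that on-device control step (entirely hypothetical names and categories; not the project's actual code, nor AWARE's API), a filter that only releases data categories a participant has approved might look like:

```python
def filter_for_upload(records, consents):
    """Split sensed records into those approved for upload and those
    retained on the phone, based on per-category participant consent.
    Hypothetical sketch, not the project's actual implementation."""
    to_upload, kept_local = [], []
    for record in records:
        if consents.get(record["category"], False):
            to_upload.append(record)
        else:
            kept_local.append(record)  # default: stay on the phone
    return to_upload, kept_local

# Invented example: location stays local, screen events may be sent.
records = [
    {"category": "location", "value": (55.87, -4.29)},
    {"category": "screen_events", "value": "unlock"},
]
consents = {"screen_events": True, "location": False}
to_upload, kept_local = filter_for_upload(records, consents)
```

The point of the design is that the default is local retention, and the participant's consent table is the only thing that moves data into the upload queue.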

Even as I write this, it reminds me of old Ubicomp work from the 1990s, by Victoria Bellotti and Abi Sellen, on feedback and control… We’ll see whether/how that might weave into this work…