I don’t seem to write blog entries very often

Unless it’s something long and rambling, I’m not sure it’s worth it… even though ‘long and rambling’ is a writing style I’m quite accustomed to.

For now, I’m not blogging here, nor posting on Facebook as I used to. I seem to be using Twitter instead, as m_j_chalmers. If you’re interested, jump over there…

Being an AE, being seen to be an AE, and maybe regretting being an AE

So, it seems that I am an associate editor for J.CSCW. I had and have my doubts, as it’s a Springer journal, but now that a paper has just appeared in my inbox from Kjeld Schmidt, I have to actually do something.

Finding reviewers is not so easy nowadays. Few people consider reviewing a priority, I feel. I don’t look forward to the phase of sending a million semi-random emails to folk vaguely connected with the topic. Also, as on most days, I am reminded of how little I know about so many topics. This paper makes that clear to me again. Fair enough, really.

I’m also, allegedly, an associate editor for the journal Information Visualization. I was an AE a decade or so ago, but I’d not had any communication for literally years. My name was not on the journal’s web site, and I thought they’d just quietly pushed me off the AE list. This would be fair enough, as I basically don’t work in this area any more.

And then, a few months ago, a paper arrived in my email, so I had to actually do something. After trying and failing for months to get reviewers, I have now realised why I got this paper: no-one actually wants to review it. People run away and hide, they pretend to have leprosy and so can’t type, they declare that they are merely the namesake of someone else who works on the paper’s topic.

So now I’m beginning to wonder whether it’s a good idea for it to be published at all. Maybe, if it’s so unpopular now, it should just be quietly shuffled off the list too.

If I ever get it reviewed, I’ll resign as AE for that journal. On the other hand, if I fail to get reviewers for it, what will they do? Maybe they’ll take my name off the journal’s web site, not send me any emails, and… quietly push me off the AE list. But then… how will I tell the difference from when I was on the list?

AI/CS considered intellectually lame

I’ve been reading some old and odd papers from way back. It may be because of the Easter break, but I’m meandering even more than usual. Here are two snippets that reaffirm my belief that my own subject, computer science, is the most lame and benighted of the sciences.

The first is from a paper by Luc Steels, whom some AI folks might know for work such as his (amazing) Talking Heads experiment. In a paper last year, Personal Dynamic Memories are Necessary to Deal with Meaning and Understanding in Human-Centric AI, he set out these levels of the construction of meaning of an experience (drawn from an art history text, interestingly enough):

  • The base level of an experience details the external formal properties directly derivable from the perceived appearance of the experience, for example, the lines, shapes, color differences in hue, value (brightness) and saturation, textures, shading, spatial positions of elements, etc. in the case of images.
  • The first level of meaning is that of factual meaning. It identifies and categorises events, actors, entities and roles they play in events, as well as the temporal, spatial and causal relations between them. In the case of images they require a suite of sophisticated processing steps, starting from object segmentation, object location, object recognition, 3D reconstruction, tracking over time, etc.
  • When there are actors involved, a second level, that of expressional meaning becomes relevant. It identifies the intentions, goals, interests, and motivations of the actors and their psychological states or the manner in which they carry out actions.
  • The next level is that of social meaning. It is about the social relations between the actors and how the activities are integrated into the local community or the society as a whole.
  • The fourth level is that of conventional meaning, based on figuring out what is depicted or spoken about and the historical or cultural context, which has to be learned from conversations or cultural artefacts, like books or films.
  • The fifth level is known as the intrinsic meaning or content of an experience. It is about the ultimate motive of certain images or texts, or why somebody is carrying out a certain behavior. It explains why this particular experience may have occurred.

Yeah. You know how this goes. How far has CS/AI ever got? How far is it likely to, given its positivist approach?

The second paper was one I skimmed years ago, but came back to the other day. It’s Hubert Dreyfus’ Why Heideggerian AI Failed and How Fixing It Would Require Making It More Heideggerian, and it strikes me yet again how almost all of CS/AI is unremittingly and naively positivist.

Most of what Dreyfus pointed out as problematic in 2007 is still at the core of our field. The massive gap between what philosophers, biologists and neuroscientists know about intelligence, ontology, epistemology, and activity… and what computer scientists do… is saddening.

And, yeah, I know: even though it’s clear that our current approaches are amazingly limited, one can do amazing things with positivist-based tech. Also, what’s the alternative? That’s hard. So, in the meantime… on we go, pretending to ourselves—and selling to others—that AI/ML might be like human intelligence one day, even though we are building it not to be.

There’s gold in them there shills. Sigh…

The oddest research project, funded almost as it ends

I am lucky enough to work with Richard Mortier and colleagues at U. Cambridge, and Ewa Luger and colleagues at U. Edinburgh, on a new project funded by the Centre for Digital Built Britain. It also has a link to, or partnership with, Google Research’s Oak project, on secure enclaves and comms.

Cambridge is setting up environmental-ish sensors in their building, and in others nearby. Edinburgh is assessing potential ethical and legal problems with the data collection. Glasgow is making a game that seamfully plays with the inferences about individuals one can make on the basis of public, anonymous data.
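
To give a flavour of the kind of inference the game plays with, here is a toy sketch of my own (made-up data, nothing from the project): even ‘anonymous’ records can single people out once a few attributes are combined.

    # Toy sketch with hypothetical data (not the project's code or data):
    # combining a few coarse attributes in an "anonymous" dataset can narrow
    # a crowd down to one person -- the kind of inference the game plays with.
    from collections import Counter

    # Anonymised rows: (age band, postcode district, regular park visitor?)
    records = [
        ("30-39", "G12", True),
        ("30-39", "G12", True),
        ("30-39", "G12", False),
        ("40-49", "G11", True),
    ]

    groups = Counter(records)  # how many people share each attribute combination?
    for attrs, count in groups.items():
        if count == 1:
            print("Combination shared by only one person:", attrs)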

Glasgow’s part is all happening in a mad rush, though. It was planned to start in September, but the admin and ethical approval processes in Cambridge meant that we didn’t get the award confirmed until well into this year… and then Glasgow’s admin folk actually lost the paperwork for a month. It officially started here a few weeks ago, and it all has to finish by the end of April. I had an RA lined up, as I had funding for five months’ RA time, but in the end he was only going to get a contract for a month. He found something better to do, quite sensibly.

I’ve found another way to get this thing made, via the fine folk in the Glasgow University Software Service, but… jeez.

HDI’s Resistance theme: four projects funded

It’s been too long since I wrote something here, so let me put up a short list of the four fine projects that we’ve funded in the Surveillance and Resistance theme of the Human Data Interaction NetworkPlus.

  • Countermeasures: Giving children better control over how they’re observed by digital sensors — Angus Main (RCA)
  • Collaborative ResistancE to Web Surveillance (CREWS) — Steven Murdoch (UCL)
  • Resisting Surveillance in Connected Cars — Anna Rudnicka (UCL)
  • Privacy Awareness for Active Resistance (PAAR) — Andrea Cavallaro (QMUL)

I am slightly envious of the people in these projects, as it seems interesting and timely to be asking: How to design for resistance against data surveillance? Human Data Interaction offers a conceptual framework for system design that goes beyond notions of data/algorithmic transparency, to focus on helping people understand what is happening to their personal data in data-intensive systems (legibility), to change those systems to be in better accord with their wishes (agency), and to work with the other people using this data, so as to improve that processing (negotiability). Work so far in HDI generally assumes that at least one of these three features can be implemented effectively, to meet people’s needs and desires. However, what should we do when legibility, agency and negotiability are not enough?

More details, as and when.

HDI funding call on Surveillance and Resistance

My new theme within the HDI NetworkPlus has a funding call active, closing on October 1. You can find out more about it, and get an application form for project funding, here.

How to design for resistance against data surveillance? Human Data Interaction offers a conceptual framework for system design that goes beyond notions of data/algorithmic transparency, to focus on helping people understand what is happening to data about them (legibility), to change relevant systems to be in better accord with their wishes (agency), and to work with the people using the data so as to improve that processing (negotiability). Work so far in HDI generally assumes that at least one of these three features can be implemented effectively, to meet people’s needs and desires.

However, what happens when one has some understanding of what is happening with one’s data, but cannot change the system or work positively with the people processing it? Then, the collection of one’s personal data becomes something more akin to surveillance — in ways that are often driven by contemporary business models, as per the widely discussed issue of ‘surveillance capitalism’. This invites the question that is at the core of this theme: what should we do when legibility, agency and negotiability are not enough? It seems that we could then support systems and processes that allow people to resist or subvert such abusive, illegal or otherwise undesired surveillance — in ways that are not in themselves abusive or illegal. In this way, we can build on the existing three tenets of HDI, and make resistance a fourth part of our conceptual framework.

In this call, we are looking for proposals that directly address this question in a practical and demonstrable way, by developing technical solutions, provocations or experimental explorations. This can include (but is not limited to) software development, prototype evaluation, interface design, and other similar arts-based responses.

The funding: We hope to fund around 5 projects (1 x £50k, 1 x £10k and 3 x £2.5k), though this will be determined by the quality of the proposals. All projects must end by Sept 2021.

On Vardi’s ‘To Serve Humanity’ CACM article

There are two main problems in this interesting article (and the Vienna Manifesto), I think, one stemming from a type error and one from an overly broad abstraction.

The first is the treatment of technology as separate from its design/use by humans. It’s not really viable, I suggest, to say that “technology sometimes seems to ‘eat’ humanity”. People use technology to ‘eat’ others, as part of politics, business, and so forth. 

I (of course) agree with his point that (most of) our discipline has not accepted the ‘dual nature of technology’… although there are corners of the discipline for which the study and avoidance of negative aspects of computational technology are the subject of everyday work. Why isn’t that work alluded to, or referenced?

The second is the overly broad treatment of ‘people’ and ‘humanity’. Vardi treats these terms as if they apply to a homogeneous group of individuals, and ignores something that stems from the previous point: for some people, using technology against other people *is* what they do as humans.

Obviously, but sadly, dominating and exploiting other people has been part of humanity ever since humanity began. It seems pointless to say here that we should put humanity at the centre of our work if we are to improve the situation… when part of humanity is the problem. There are already people fighting against them… but I don’t see much in this article about what they have done or are doing…

Also, yes, there is a significant part of humanity within computer science who cause similar ill effects unknowingly or naively. I wonder if this is the part that Vardi really wants to influence… or has a chance in hell of influencing…

So… It seems odd (to me) of Vardi to write “Yet the participants [of the Vienna workshop] were convinced it is possible to influence the future of science and technology and, in consequence, society.” Well, yes, sure. The future of science and technology is continually being influenced by changing societal norms, educational practices, legal regulation, etc. It seems quite bizarre to say (as the Vienna Manifesto puts it) “We encourage our academic communities, as well as industrial leaders, politicians, policy makers, and professional societies all around the globe, to actively participate in policy formation” when there are already academics, industrialists and so forth engaged in lobbying and other forms of policy formation. What matters is who among these people has the political will and financial clout to influence them in a way that advances the values he wants to advance, and (more importantly?) to overcome those with the political will and financial clout to push things on as they are… or worse.

I do agree with the motivating sentiment of what Vardi and the manifesto say here. I just wish it was better expressed, and dealt more realistically with the momentum and power of the processes already known to be active in stopping what Vardi wants to happen. Some of the manifesto points are great, but some just seem to be woefully simplistic — missing points raised by some of our most recent major technologically-driven problems, or excessively bound up in Western-centric politics, or not seeming to really handle the complexity of fundamental issues (e.g. ‘fairness’).   

I am therefore sad to say that I think that this manifesto, like a lot of other similar moral and ethical frameworks created in and around computer science, will have zero or minimal effect. I’d be very, very happy to be proved wrong, though. 

HDI funding call on ‘beyond smart cities’ is out

A few weeks ago, the Human Data Interaction NetworkPlus held a workshop on ‘beyond smart cities’ in Edinburgh. Based on that, we’re now putting out a call for projects that we can fund (up to £50K) within this theme. August 2nd is the deadline, and the web page for it is here:
https://hdi-network.org/beyond-smart-cities/

As the call says, we particularly welcome proposals that address topics such as (but not limited to) the following, from the perspective of HDI:

  • Redesigning councils around data tech: Too often, procurement processes are the bane of smart cities work, so how can cities work in better ways? How can a city deflect the commercial push towards large-scale systems that are ‘canned’ generic products, rather than systems designed with and for it?
  • Individual versus the Collective: How to deal with the way that, in smart city systems, the benefits to one person may mean costs or losses to others? Similarly, benefits to one city area may negatively affect other areas, or affect rural regions.  How to design smart city systems in ways that take account of such inequalities and interdependencies?
  • The encouragement or imposition of behaviours: Many smart city designs imply or demand behavioural changes among citizens, but who defines these, and how? How to handle the surveillance and governance issues stemming from this ‘push’ by cities upon citizens?
  • How to trust data and services? Several of our workshop participants discussed variants of a ‘citizen science’ approach to trust, in which processes of data collection, measurement and evaluation are in the hands of citizens, so that they can act in a bottom-up way to feed into the processes of urban change. How can such citizen-led approaches create utile evidence for decision-making?

Fingers crossed we’ll get some good proposals!

‘Interpreting Computational Models of Interactive Software Usage’ accepted to CHI workshop on Computational Modeling in HCI

Good news from Oana Andrei, lead author on this workshop paper based on our Populations project from way back. The paper, outlining some of the formal methods developed for analysis of app usage logs, will be presented at the workshop at ACM CHI in Glasgow in a few months. Here’s the abstract:

Evaluation of how users actually interact with interactive software is challenging because users’ behaviours can be highly heterogeneous and even unexpected. Probabilistic, computational models inferred from low-level logged events offer a higher-level representation from which we can gain insight. Automatic inference of such models is key when dealing with large sets of log data, however interpreting these models requires significant human effort. We propose new temporal analytics to model and analyse logged interactions, based on learning admixture Markov models and interpreting them using probabilistic temporal logic properties and model checking. Our purpose is to discover, interpret, and communicate meaningful patterns of usage in the context of redesign. We illustrate by application to logged data from a deployed personal productivity iOS application.
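
To make the flavour of this concrete for myself, here is a much-simplified sketch of my own (not the paper’s method or code): a plain first-order Markov chain fitted to some made-up event logs, plus a bounded-reachability check of the kind that probabilistic temporal logic properties express. The paper’s admixture models and model checking go well beyond this.

    # Much-simplified sketch (not the paper's code): fit a first-order Markov
    # chain to logged event sequences, then answer a bounded-reachability
    # question of the kind probabilistic temporal logic expresses, e.g.
    # "what is the probability of reaching 'save' from 'open' within 3 steps?"
    from collections import defaultdict

    def fit_markov_chain(sequences):
        """Estimate transition probabilities P(next | current) by counting."""
        counts = defaultdict(lambda: defaultdict(int))
        for seq in sequences:
            for a, b in zip(seq, seq[1:]):
                counts[a][b] += 1
        return {state: {succ: n / sum(nexts.values()) for succ, n in nexts.items()}
                for state, nexts in counts.items()}

    def prob_reach_within(chain, start, target, k):
        """P(reach `target` from `start` in at most k steps), by forward propagation."""
        dist = {start: 1.0}
        reached = dist.pop(target, 0.0)  # handles start == target
        for _ in range(k):
            nxt = defaultdict(float)
            for state, p in dist.items():
                for succ, q in chain.get(state, {}).items():
                    nxt[succ] += p * q
            reached += nxt.pop(target, 0.0)  # mass absorbed at the target
            dist = nxt
        return reached

    # Hypothetical usage logs: each list is one session of logged UI events.
    logs = [["open", "browse", "edit", "save"],
            ["open", "browse", "close"],
            ["open", "edit", "save", "close"]]
    chain = fit_markov_chain(logs)
    print(prob_reach_within(chain, "open", "save", 3))  # 0.666... on this toy data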

I’ll add the PDF to Publications soon…

Workers by Self-Design: Digital Literacies and Women’s Changing Roles in Unstable Environments — first workshop and public event

This GCRF-funded network will soon have its first workshop, in Glasgow, and within it is a public event. 

As the public event’s EventBrite invitation page says, “The first public event of the network is scheduled for March 12th, 2019 where we invite participants to attend a series of short presentations on the topic, as well as to share their knowledge, experiences and challenges. If you are interested in women’s roles and experiences with non-traditional work pathways, digital literacies and innovative entrepreneurial solutions, join us!”