Contact-tracing and surveillance
Published by marco
Contact tracing, or just “tracing”, is a way of determining who’s been infected with a contagious disease in a community. Compared to self-isolation en masse, it’s a finer instrument: instead of everyone staying away from each other, properly trained workers trace the path of the disease, using this information to isolate the ill from the still-healthy.
How it works
When someone tests positive for a contagious disease, tracers interview them to find out where they’ve been and who they’ve “contacted” since they most likely became contagious. This period is estimated from what we know of the disease in question and, based on symptoms, the stage of infection the patient is in.
For example, assume that we think that an infected person is asymptomatic for up to a week before they exhibit symptoms. Assume someone exhibited symptoms for two days and self-isolated before going to the hospital or calling a doctor. The contact-tracing team needs to find out who that person contacted for the week prior to showing symptoms. Two days can presumably be accounted for as self-isolated—except for cohabitants, who must be traced next.
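The window arithmetic in that example is simple enough to sketch in code. This is a minimal, illustrative sketch: the seven-day presymptomatic period is the hypothetical figure from the example above, not a real epidemiological parameter.

```python
from datetime import date, timedelta

# Hypothetical disease parameter from the example above; real values
# come from epidemiologists, not from code.
MAX_PRESYMPTOMATIC_DAYS = 7


def tracing_window(symptom_onset: date, today: date) -> tuple[date, date]:
    """Return the (start, end) period whose contacts must be traced.

    The patient was potentially contagious from up to a week before
    symptom onset.  The days after onset still matter for cohabitants,
    so the window runs right up to today.
    """
    start = symptom_onset - timedelta(days=MAX_PRESYMPTOMATIC_DAYS)
    return start, today


# Symptoms appeared April 10th; the patient reported on April 12th.
start, end = tracing_window(symptom_onset=date(2020, 4, 10), today=date(2020, 4, 12))
print(start, end)  # 2020-04-03 2020-04-12
```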
Tracers ask questions, get names, find those people, run tests on them, and fan their interviews out from there until they can confirm they’ve hit a dead end: no more new people testing positive.
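That fan-out is essentially a breadth-first traversal of the contact graph. Here is a minimal sketch, in which `get_contacts` and `tests_positive` are stand-ins for the interviews and lab tests that real tracers perform:

```python
from collections import deque


def trace(index_cases, get_contacts, tests_positive):
    """Fan out from the initial positive cases, breadth-first.

    `get_contacts(person)` stands in for the interview;
    `tests_positive(person)` for the lab test.  Returns everyone who
    tested positive.  The search dead-ends when a ring of contacts
    all test negative.
    """
    positive = set(index_cases)
    interviewed = set(index_cases)
    queue = deque(index_cases)
    while queue:
        person = queue.popleft()
        for contact in get_contacts(person):
            if contact in interviewed:
                continue
            interviewed.add(contact)
            if tests_positive(contact):
                positive.add(contact)   # isolate, then interview them next
                queue.append(contact)
    return positive


# Toy contact graph: A infected B, who infected C; D was a contact but is negative.
contacts = {"A": ["B", "D"], "B": ["C"], "C": [], "D": []}
infected = {"A", "B", "C"}
print(trace(["A"], contacts.__getitem__, infected.__contains__))
```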
And here’s the key: once you’ve found people who are positive, you have to isolate them to prevent them from spreading the disease further. Tracing is nearly useless if you don’t isolate based on your results. If you don’t, you just end up starting all over with tracing—back to square one.
Tracing is established epidemiological groundwork and it is very manpower-intensive. There is nothing magical about it. It’s not foolproof, but if done assiduously, it’s pretty damned good. It’s how China and South Korea stamped out their first wave of Covid-19, precisely because they paired tracing with isolation. Tracing alone means that you’re always behind the spread, observing the trail rather than getting ahead of it and putting in a firebreak.
“South Korea, the much-hailed model, locked down two cities, currently takes everyone’s temperature in public spaces, monitors every person’s movement through cell phone and television data, and uses government and public surveillance to keep tabs on any individual suspected of carrying coronavirus so it can enforce self-isolation. Taiwan is the same. Locals report getting a knock on the door from the police a half hour after their cell phones died because their movements could no longer be tracked.”
The siren call of technology
We hate the sound of “manpower-heavy” because that sounds like a lot of organization and UGH… a lot of work, so we want technology to solve this for us, just like it’s so wonderfully solved everything else.
China and Korea did benefit from technology in that they have a pretty widespread network of social-app data to which the government has access. In addition to the interview—and they did the interview parts too—they were able to use location and contact data to establish connections that might have otherwise gone undiscovered.
To be clear, though, it works without apps as well. The cartoon to the right (obtained from Contact Tracing) explains how a hypothetical app might fulfill the role historically played by people.
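For the curious, the scheme such cartoons typically describe (phones broadcasting short-lived random identifiers, published only after a positive test) can be roughly sketched as follows. Everything here is illustrative: the key sizes, rotation schedule, and function names are assumptions for the sketch, not any official protocol.

```python
import hashlib
import os


def daily_key() -> bytes:
    """A random secret generated on the phone; by default it never leaves it."""
    return os.urandom(32)


def rolling_ids(key: bytes, per_day: int = 96) -> list[bytes]:
    """Derive the short-lived identifiers the phone broadcasts over Bluetooth.

    Each ID is a one-way hash of the daily key and a counter, so an
    eavesdropper can neither link broadcasts together nor recover the key.
    """
    return [hashlib.sha256(key + i.to_bytes(4, "big")).digest()[:16]
            for i in range(per_day)]


# If Alice tests positive, she publishes her daily keys.  Bob's phone
# re-derives her IDs and checks them against the ones it overheard.
alice_key = daily_key()
heard_by_bob = {rolling_ids(alice_key)[42]}  # Bob sat near Alice once
exposed = any(i in heard_by_bob for i in rolling_ids(alice_key))
print(exposed)  # True
```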
Will it be voluntary?
One argument for this type of technology is that most people have already willingly given up their data for the dubious benefit of partaking in various social-networking endeavors, so they shouldn’t be too troubled by an app that will literally save lives.
This isn’t a terrible point to make, but we should think about a few facets of the argument that are only implied. Social-networking apps have, among others, the following characteristics:
- Participation is voluntary.
- It is also, in theory, revocable.
- It is possible to participate anonymously (or at least semi-anonymously).
- One can have multiple identities.
Will installing, granting broad permissions to, and keeping a life-saving app on at all times also be voluntary?
Can app-based contact-tracing even work if participation is voluntary? No, it cannot. It will not work at all without proper, boots-on-the-ground contact-tracing and without a high level of participation. Otherwise, you can just save yourself the trouble of developing it.
Which takes us to the next section.
What about data-privacy laws?
If you read the cartoon above, you have an idea of how the app might work. It sounds doable but, putting aside the basic feasibility (for now), how would such an app integrate into a society with data-protection laws?
- Will the data be untraceable, as the cartoon hopes?
- Which government agencies get access to the tracking data?
- When is it deleted? Two weeks? A month? After a year? Never?
- Are we issued a dedicated device, or are we expected to run the government’s surveillance app on our private devices?
Those are just the policy questions. Let’s assume, for now, that whoever issues the app can provide enough guarantees about anonymity and data-privacy to satisfy local laws.
What does the workflow look like?
Given that the app can detect and notify contacts, how would the whole “reporting and responding” part work?
- Do you trust a potential contact from an app enough to self-quarantine?
- Or do you have to report for a test?
- Do we trust the test enough to self-isolate?
- What about a negative result? Does that mean you’re in the clear?
- If you can trust neither a positive nor a negative result, then what’s the point?
- Is it only for the authorities to ping you to report for testing because they think you’ve contacted someone?
- Do you get a message if you rode on the same train car as someone who tested positive?
- Or does the app only come into play once you’ve tested positive, to allow the authorities to track your contacts afterwards?
- Do the police show up to enforce isolation?
It’s already not so easy to strike a balance between public safety and civil rights during a pandemic. The important parts involve solidarity and trust in the government. It’s not clear to me how an app would help.
Leverage existing infrastructure
Why do we even need a new app to surveil ourselves? Aren’t we already being watched six ways from Sunday as it is?
There are two surveillance networks. One is the global U.S. surveillance system exposed by Edward Snowden. Snowden’s revelations led to nearly no change in behavior. The program was certainly not dismantled. It still exists and it has likely been extended since then.
We will almost certainly not make use of this network to track people because of … drum roll … national-security reasons. The U.S. will go to its grave with those words on its lips.
When people claim that the citizenry has already opted for pervasive surveillance, they are referring to the private networks set up by Google and Facebook and Apple, which are basically able to track most people’s every move. They did it to sell ads, but now it’s time to use it for good. I’m a bit too cynical to think that this could possibly work, as, for example, Maciej Cegłowski argues it could in the article We Need A Massive Surveillance Program (Idle Words).
He argues (as above) that, for better or worse, we already have a surveillance system in place and it would be “shameful” not to use it for this glorious cause.
“Of course, the worst people are in power right now, and the chances of them putting such a program through in any acceptable form are low. But it’s 2020. Weirder things have happened. The alternative is to keep this surveillance infrastructure in place to sell soap and political ads, but refuse to bring it to bear in a situation where it can save millions of lives. That would be a shameful, disgraceful legacy indeed.
“I continue to believe that living in a surveillance society is incompatible in the long term with liberty. But a prerequisite of liberty is physical safety. If temporarily conscripting surveillance capitalism as a public health measure offers us a way out of this crisis, then we should take it, and make full use of it. At the same time, we should reflect on why such a powerful surveillance tool was instantly at hand in this crisis, and what its continuing existence means for our long-term future as a free people.”
So his message is pretty mixed, reluctantly coming down on the side of “let’s use this for good, but also think about how bad it is and shitcan it the moment we don’t need it anymore.”
This is where the argument bogs down: measures of this kind have historically lingered pretty much forever. The Espionage Act of 1917—originally meant to keep a boot on the necks of those upstart Bolsheviks with their dreams of socializing the world—is still on the books and has been used more than ever in the 21st century.
The Patriot Act is renewed like a Reader’s Digest subscription: unthinkingly and, for nearly 20 years now, largely unread. Echelon is still in place. The whole NSA network is in place. Google and Facebook gather scads of data on us, despite assurances to the contrary.
This happens all the time
The article We’re not going back to normal by Gideon Lichfield (MIT Technology Review) likens the coming surveillance and tracking measures to the concessions we made in the last 20 years in order to be able to fly.
“There would be temperature scanners everywhere, and your workplace might demand you wear a monitor that tracks your temperature or other vital signs. Where nightclubs ask for proof of age, in future they might ask for proof of immunity—an identity card or some kind of digital verification via your phone, showing you’ve already recovered from or been vaccinated against the latest virus strains. We’ll adapt to and accept such measures, much as we’ve adapted to increasingly stringent airport security screenings in the wake of terrorist attacks. The intrusive surveillance will be considered a small price to pay for the basic freedom to be with other people. (Emphasis added.)”
The analogy to flying is appealing at first. The surveillance and security-theater hassles associated with flying were optional, though. What we’re talking about is a significantly changed society. It might not be avoidable in that too many people will suffer and die if we don’t restructure, but we should at least think about each step instead of just listening to the loudest, most-panicked voices. We should at least have some reassurance that the measures being promoted will have the desired effect. What’s the point of giving up a freedom for nothing in return?
The article Protecting Civil Liberties During a Public Health Crisis by Matthew Guariglia And Adam Schwartz (EFF) asks some of the right questions.
“But we must not lose sight of the great sensitivity of the personal data at issue–this data paints a clear picture of the travel, health, and personal relationships of airline passengers. EFF would like the CDC to explain what it will do to ensure this sensitive data is used only to contain communicable diseases. For example, what measures will ensure this data is purged when no longer helpful to contact tracing? Also, what safeguards will ensure this newly collected data is not used by police for ordinary crime fighting, or by ICE for immigration enforcement?”
These are good questions that should have good answers before we proceed with anything invasive.
The article Protecting Openness, Security, and Civil Liberties by Cindy Cohn (EFF) starts by conceding that we’ve already upended our lives in many ways—but should still be careful with every additional concession.
“We know that this virus requires us to take steps that would be unthinkable in normal times. Staying inside, limiting public gatherings, and cooperating with medically needed attempts to track the virus are, when approached properly, reasonable and responsible things to do. But we must be as vigilant as we are thoughtful. We must be sure that measures taken in the name of responding to COVID-19 are, in the language of international human rights law, “necessary and proportionate” to the needs of society in fighting the virus. Above all, we must make sure that these measures end and that the data collected for these purposes is not re-purposed for either governmental or commercial ends.”
Finally, the article Privacy vs. Surveillance in the Age of COVID-19 by Bruce Schneier also encourages using every weapon we have to prevent needless death and suffering, but to recall invasive tools the minute we don’t need them anymore.
“I think the effects of COVID-19 will be more drastic than the effects of the terrorist attacks of 9/11: not only with respect to surveillance, but across many aspects of our society. And while many things that would never be acceptable during normal times are reasonable things to do right now, we need to make sure we can ratchet them back once the current pandemic is over.”
Accept necessary but temporary measures
Just because we’re in “hammer” mode to buy time right now doesn’t mean we have to continue to act in panic. We’ve bought ourselves some time to think. We should use it wisely.
We can only hope that we get answers and assurances that the efficacy of contact-tracing measures will be continually evaluated. There have to be concrete “pull the plug” clauses in any laws that go into effect. These clauses should apply both to addressing situations where measures are being abused and also to “sunsetting” measures that are no longer needed.
If the involved parties fail to comply, there must be punishment.
Just kidding: obviously there won’t be.
They’ll write shitty apps that steal all of your data and do terrible, stupid things with it. The more disadvantaged your various identity groups, the worse it will be. The usual suspects will come out on top with all of the money.
Just as Schneier can’t possibly believe that his pleas won’t fall on deaf ears, I’m also deeply skeptical that our current civilization can do this. Still, I was surprised to see how much of the world enacted these isolation measures, so let’s wait and see.
Even intelligent and clear-thinking grudging proponents of using surveillance (like Cegłowski above) spend ¾ of their time hedging about misuse. History shows that power will be misused and that the laws granting it will never be repealed voluntarily.
Snowden and Greenwald on Surveillance
For more on the existing surveillance systems, the following interviews and discussions with Edward Snowden are quite illuminating:
I found the second interview quite good and have included a partial transcript of the parts I found most relevant.
“Edward Snowden: Presumably—and this was the presumption of the question put to me before—is the idea that this is a choice between mass surveillance or the just completely uncontrolled spread of an infectious virus that can cause serious disease. And I don’t think that’s accurate. In fact, I know that’s [not] accurate. I know a little something about how surveillance works here.
“What we are being asked is to accept an involuntary mass surveillance in a way that has never been done before at this scale, in the context of a real crisis. They just go: ‘Look, we’re going to do this, the data already exists, the phone companies … we’re going to apply it to sort of a new use case. We’re going to [take] this surveillance infrastructure that already exists, or rather we’re going to take this communications infrastructure that was not designed for surveillance—or, rather, it was told to us that it would not be used for surveillance—and we’re going to use it for precisely that, but for a really good reason.‘
“Now, they say that this is necessary, they say that there is no alternative, they say that if you want to save lives, you’re going to have to do this. But that’s not true. The question here is between the involuntary surveillance of everyone that has been carrying a phone over the last however-many weeks or months or years that they want to look back to. Because, remember, these records of your movements, at least by AT&T and Verizon in the United States, are reputed to go back to 2008. Everywhere your phone has traveled since 2008, they know.
“There are no laws regulating how long they can retain this information, in large part in the United States.
“Now, imagine an alternative: you go to the hospital, you are diagnosed with an infection and the doctor goes: ‘it would be really helpful for you to be able to voluntarily share the movements of your phone.’ So you go in with your app and you show them: ‘I was sitting next to a guy who I don’t know who they are, but you said they were infected.‘ You now get priority access to this kind of testing, you can get priority access to treatment because it is clear that you have potentially been exposed.
“And none of this requires privacy sacrifices, none of this requires any sort of involuntary or intrusive violation of rights. And the funny thing is, these capabilities are not difficult to create. This platform could have [been] slapped together in four days by a bunch of university researchers working together, if they had had the kind of funding, mandate and support.
“Glenn Greenwald: A lot of your answers are predicated on the desirability not of government coercion but of voluntary conduct. That is not only in the individual’s enlightened self-interest, but in the interest of society, which, in turn, means that there is a flow of information that is accurate and reliable and trustworthy, that people put their faith and confidence in, as kind of reliable font of authority to form their understanding of how the pandemic functions.
“And, maybe, I’m not sure, but I suspect it’s the case that there are countries in which there is faith in some kind of centralized authority, whether it’s scientists with government or media outlets that they trust, to get this information and it can be effective. But, in other countries, and certainly in the United States and it’s true here in Brazil and it’s definitely true throughout Western Europe, there’s a collapse of trust in these institutions of authority, where people aren’t sure anymore what to believe.”
It’s not “true throughout Western Europe”. Perhaps in France or Italy, but in Germany and Switzerland, at least, and most likely also Scandinavia, there is considerable support and trust in the government that they are doing the right thing.
If we do concede liberties to a government that we don’t even trust, it should at least be for something that actually works for fighting the disease. Can an app even do what people think that it can? We’ve already seen that the app would be just an aid to other, more traditional contact-tracing measures.
The technology is not there
When many people hear “app”, they probably just think they’ll install it, make sure Apple’s “Health” app is running, grant all permissions to everything, turn on Bluetooth, turn on location services, and then they’ll be saved from Covid-19. No wonder they can’t wait to get their hands on it.
It’s not going to be like that, at least not for a long while. There are many technical, technological, and process hurdles to jump first.
I’ll start with another security-specialist article, Me on COVID-19 Contact Tracing Apps by Bruce Schneier, which reflects my own misgivings on apps for contact tracing.
“I’m not even talking about the privacy concerns, I mean the efficacy. Does anybody think this will do something useful? … This is just something governments want to do for the hell of it. To me, it’s just techies doing techie things because they don’t know what else to do. […] It has nothing to do with privacy concerns. The idea that contact tracing can be done with an app, and not human health professionals, is just plain dumb.”
Snowden mentioned above that an app “[…] could have [been] slapped together in four days by a bunch of university researchers working together”, but I think he’s drastically underestimating the effort involved and how difficult it is to develop software at scale. Not only that, but the countries currently discussing apps are also vastly overestimating the capabilities of Bluetooth, NFC, software, and battery life.
Does that damned thing even work? Can it even be made to work reliably? It depends on Bluetooth, right? Is the location data precise enough to say anything about contagion? The image to the right shows my iPhone 6s having been bled dry by the venerable WhatsApp in just 5.5 hours of doing nothing. I remember years of Apple’s Messages app screwing up the order of messages and double-sending to some devices. It still doesn’t know what to do with non-Apple devices (SMS barely works in groups).
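For context on why Bluetooth-based proximity is so dicey: apps infer distance from received signal strength (RSSI) with a log-distance path-loss model, roughly like the sketch below. The parameters here are illustrative; in practice they vary wildly by phone model, case, pocket, and orientation, which is exactly the problem.

```python
def estimate_distance(rssi_dbm: float,
                      tx_power_dbm: float = -59.0,
                      path_loss_exponent: float = 2.0) -> float:
    """Estimate distance in meters from a Bluetooth RSSI reading.

    Standard log-distance path-loss model.  `tx_power_dbm` is the
    expected RSSI at 1 m; `path_loss_exponent` is ~2 in free space and
    higher indoors.  Both are assumptions that differ per device.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))


# A few dB of noise (entirely normal indoors) swings the estimate
# from "same table" to "across the room":
for rssi in (-59, -65, -71):
    print(f"RSSI {rssi} dBm -> ~{estimate_distance(rssi):.1f} m")
```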
Analyzing Garmin Connect
Instead of picking only on WhatsApp, let’s take a look at how well-established, years-old software working with dedicated hardware works. This example is from a Garmin ForeRunner 435 with the Garmin Connect app. It deals with health data.
The following screenshots are from the app just a few days ago. The only thing correct on it is the number of steps. Everything else is laughably wrong. Will the new contact-tracing app do any better? Or will it, too, be filled with guesswork and official-looking but ultimately fantastical data?
The watch claims that my high heart rate was 153bpm, but I know it never went that high during the whole walk. In fact, I caught the readout on my watch as I walked into my apartment—and it read 153bpm as I was just standing there. So now it takes the misreading of my heart rate—something you would think would be working reliably by now—and extrapolates all sorts of madness from it.
For example, those intensity minutes. It looks impressive, but I know it rained most of the week and I didn’t go for any runs or longer walks with the watch. Let’s look at the intensity minutes in the next graphic: it claims that the three hours I walked today were all in a high-intensity zone, earning double points and 360 total minutes. That’s flat-out bullshit. It was just a walk.
Also, I have no idea where it got the 240 minutes (4 hours) of intensity on Tuesday. I must have worn the watch, but it probably just locked its heart-rate reading at 130 while I was sitting at my desk. On the other hand, I wore the watch for a very intense hour-long workout on Saturday, which doesn’t show up at all.
I walked about 12km today with about 400m of incline, but my watch thinks I only burned 2,222 kcal all day, which seems like a normal amount for a day on which I hadn’t walked 12km.
I don’t wear my watch when I sleep, so the app is 100% guessing on how much I sleep. It consistently guesses between 10.5 and 12 hours, which is off by a good amount. I almost never sleep longer than 8 hours. I know that’s wrong. The app is literally pulling numbers out of thin air.
Trusting false technological Gods
Let’s imagine an app developed as quickly as possible. Tight deadlines do not a reliable app make. WhatsApp is an old, established app maintained by one of the best developer teams in the world, and even it occasionally just drains a phone for no reason. And WhatsApp doesn’t even use Bluetooth or location services. Garmin has also been making apps and devices for a long time, and their stuff has gotten progressively better. But it isn’t even accurate enough for a hobbyist to use for tracking health data. How is a first-time app going to be more reliable than these apps right out of the gate?
So what’s the problem? The problem is that these apps have been in development for years and their data can’t be trusted, even for fun. The problem is that we then expect a brand new app to do a better job just because we want it to. And this new contact-tracing app is serious business. When it gets its data wrong, people get sick and people die. The problem is that people imbue these devices with God-like awareness and accuracy. Once numbers exist, they must be correct, … right? Right?
We don’t know how to write reliable software
Software is notoriously hard to get right. Hurrying makes it worse. These apps will most likely be developed by amateurs (e.g. Swiss college students are, by definition, amateurs) or jaded professionals, all steeped in a culture of building MVPs (minimal viable products) and using iterative release pipelines to fix things up in production, using customers as beta-testers. Are people aware that they’ll be entrusting their lives or future health to beta software?
The people who know how to build apps have no practice building reliable, bullet-proof software. Their motto is “fail early, fail often” and they iteratively get better over time. Sometimes. Often enough, software hits a local maximum that isn’t very high at all, ending up in the doldrums of “good enough”. We don’t really know how to build software that is great and reliable from the get-go.
The first versions will be atrocious and will most likely torpedo the whole effort. Just ask the SBB (Swiss Railway System), which took years to regain its reputation after a catastrophic nationwide app release. The contact-tracing app will be built by people even less professional than that.
It’s not that it’s impossible, just that it’s highly unlikely. Until it does get reliable, though, decisions will be made based on its shitty data-detection and its incorrect algorithms.
Here come the trolls
And even then, even granted that the app works as advertised (it’s fast, accurate, low-power, and guarantees anonymity), what about misuse? The article Apple and Google detail bold and ambitious plan to track COVID-19 at scale by Dan Goodin (Ars Technica) raises one such concern:
“Another possible weakness: trolls might frequent certain areas and then report a false infection, leading large numbers of people to think they may have been exposed. These kinds of malicious acts could be prevented by requiring test results to be digitally signed by a testing center, but details released on Friday, didn’t address these concerns.”
Kids could get out of school by sending in one kid with a phone that alerts as positive. The whole school closes; everyone is quarantined. You could tie your positive-bleating phone to a dog and have it run through a neighborhood, or send it on a drone through a mall. There’s nearly no end to the mischief that unserious people could inflict on the system.
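The mitigation Goodin’s article mentions, digitally signed test results, might be sketched like this. An HMAC stands in for a real signature here, and the key is hypothetical; an actual system would use asymmetric signatures (e.g. Ed25519) so that verifiers never hold the signing key.

```python
import hashlib
import hmac

# Hypothetical secret held by an accredited testing center.  In a real
# deployment this would be an asymmetric private key, never shared.
CENTER_KEY = b"testing-center-secret"


def sign_result(patient_keys: bytes) -> bytes:
    """Testing center attests that these exposure keys belong to a real positive test."""
    return hmac.new(CENTER_KEY, patient_keys, hashlib.sha256).digest()


def accept_report(patient_keys: bytes, tag: bytes) -> bool:
    """Server only accepts reports carrying a tag the center could have produced."""
    expected = hmac.new(CENTER_KEY, patient_keys, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)


keys = b"alice-daily-keys"
tag = sign_result(keys)
print(accept_report(keys, tag))           # True: attested by the center
print(accept_report(b"troll-keys", tag))  # False: forged report rejected
```

A troll without a center-issued tag cannot inject a false positive; of course, this only helps if every testing center participates and guards its keys.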
So where does that leave us?
- Contact-tracing is manpower-intensive, but works without an app.
- Building and deploying an app is not as easy as people are making it sound. It’s not a panacea.
- It’s not clear how an app would help rather than hinder.
- It’s not clear that an app can even be made to work as hoped.
- Even if it does, trolling will be rampant and threaten to ruin trust and data.
- We’re terrible at building reliable software. We’re even worse at it when we hurry.
- The first iterations will be nearly catastrophically bad—each of which could torpedo the whole project by killing people’s trust in the app.
- Once we’re surveilled, we’ll probably always be surveilled. Power doesn’t relinquish power willingly. If we accept an app, then it, and its data-collection, is here to stay.
- That data will be put to other, non-pandemic uses sooner rather than later.
- Those uses will not be to your benefit.
- Trading privacy and liberty for an app that doesn’t have much of a chance of delivering enough compensatory value is a bad deal.
The more-recent article What’s Behind South Korea’s COVID-19 Exceptionalism? by Derek Thompson (The Atlantic) corroborates South Korea’s approach:
“Individuals with the most serious cases were sent to hospitals, while those with milder cases checked into isolation units at converted corporate training facilities. The government used a combination of interviews and cellphone surveillance to track down the recent contacts of new patients and ordered those contacts to self-isolate as well.”