Category Archives: Analysis

What would you say ya do here?

I volunteered for career day for the 11th and 12th graders at my son’s school, and they asked me to explain what I do:

More specifically, they asked me to “please provide a brief job description and list the most important aspects of your current job. This will help our students understand what you do on a daily basis.”

Wanting to be completely honest with these kids, who are about to try to pick a school, pick a major, and figure out a career, I sent them this:


I lead a plucky team of data scientists, engineers, and analysts in finding undeclared nuclear R&D around the world. We built/bought/integrated the software, begged/borrowed/took the data, fused it together into something that can swing a billion data records at our most difficult questions, and trained people in how to wield the tools we’d built.

Disciplines involved:
– large-scale data analytics
– information modeling
– programming
– machine learning
– information visualization
– persuasion
– patience
– impatience
– not knowing when to quit

What would you say?

The War on Terror is over. Terror won.

The country that claims to lead the free world now openly spies on its allies and its own citizens. The rationale? Terrorism and crime.

The President-Elect won on a platform that stated America is in trouble, she is in danger, and your neighbor, your neighboring country, your neighboring religion, your neighboring race, your own allies…they are the danger. What danger? Terrorism, crime, and economic ruin.

Parents risk losing custody of their children if they allow them to explore their own neighborhoods. Why? It’s a dangerous neighborhood. Strangers are dangerous, and they are everywhere. Better your children be taken by CPS than by a human trafficking ring.

We spend billions and have invented almost an entire new branch of government with a slew of new acronyms…DNI, NCTC, DHS, and TSA, to name a few. Terrorism.

Where once we worried about Africanized bees, now we see the spread of militarized police. Crime.

Cameras, ubiquitous now, are not welcome near the actions of these entities. Drones recording the #NoDAPL protests are grounded by the FAA. We can’t see what the militarized police are doing.

We imprison more of our people per capita than any other developed nation.

But terrorism is a danger on par with texting while driving in number of annual deaths. Perhaps we need to reduce the WoT to a few ad campaigns and PSAs. Or maybe create half a dozen new federal agencies (armed, with drones) to fight the WoTWD.

Crime is down too. Actual stranger danger is so rare as to be almost nonexistent. Your child is in more danger at home, indoors.

The economy is doing well. The rich are getting richer faster than anyone, but everyone else is doing better. Global poverty continues to drop dramatically. Goods are cheaper.

There are localized differences. Deaths by gun are high in the US. Deaths by armed conflict are high in the Middle East. Deaths by disease are high in Africa. Factory workers are losing jobs to automation even as Uber makes anyone with a car employable. But globally, and nationally in the US, we are living better than ever before. 

Our greatest actual dangers are problems of affluence: we have so much cheap food that we are eating ourselves to death, and so many cars whizzing around that we routinely smash them into each other with people inside.

We are not acting rationally. We are reacting out of fear. We live in fear. We are like a child who is too scared to go down into the dark cellar yet has to be held back from running into traffic. Irrational.

The only winners are those who profit from fear.

Signal to noise

When people talk about information analysis, there’s often a lot of worry about noise in the data, and the reliability of the data sources. So when you’re building an information analysis system, there are often requirements that have to do with “filtering out bad data” or assigning “reliability scores” to data sources.

But in practice this isn’t usually necessary. With enough data, noise suffers from destructive interference, and signal interferes constructively. I first learned about this in physics class in school, and first encountered it while doing radio astronomy, as a research assistant on a team that realized we had an opportunity to make the highest-quality radio maps ever made of certain galaxies.

See, when big stars go supernova, astronomers like to aim the Very Large Array of radio telescopes in New Mexico at them to study how the massive explosions grow and change. They get a lot of data, write their papers, and move on. But supernovae happen in galaxies. And scientists share data. So our PIs realized they had very long exposures of the galaxies containing the supernovae.

Radio astronomy “pictures” tend to be noisier than pictures taken by optical telescopes, and noise in a picture of a distant galaxy can overwhelm the detail of star-formation regions and the like. However, the noise is random, whereas the bright spots (except for the supernova) don’t change noticeably. So if you take the sum of enough radio data, the random noise cancels out, and the actual bright regions become clear. (Note: this is not the same thing as astronomical interferometry, although that was a technique we also made use of.)

The same thing is true of many other types of noisy data. If you only look at the data at a single point in time, or watch or listen to it as it changes, it’s difficult to see the signal through the noise. But if you have a system that allows this summation to happen, and you can look at the sum, the picture suddenly becomes clear.

Suppose the “ground truth” looks like this:

[Hand-drawn sketch of the ground truth signal. Caption: "A draftsman I'm not."]

But we have noisy data that looks like this:

[Sketch of a single noisy data trace. Caption: "I hope that's not an EKG."]

If we layer lots of noisy data, we can start to see that the signal’s there…

[Sketch of many noisy traces layered on top of each other. Caption: "I drew this better on the whiteboard last week."]

But if we can sum the data it looks something like this:

"The hands acquire shakes, the shakes become a warning. It is by caffeine alone I set my mind in motion." --

Now we can clearly see the signal! Is it truth? Not necessarily, but the analyst can now see that there is agreement across the data. If you want more information, you also need a system that gives you a way to dive into the details of which data contributed to the peaks. And now you also have guidance as to where to collect more data, ideally from additional sources. More on that in another post.
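(If you like code more than my drawings, here’s a minimal sketch of the same idea in Python with NumPy. The signal shape, the noise level, and the number of sources are all made up; the point is just that any single noisy observation hides the bumps, while the average of a couple hundred of them does not.)

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented "ground truth": mostly flat, with two genuine bumps.
x = np.linspace(0, 10, 500)
truth = np.exp(-(x - 3) ** 2 / 0.1) + 0.6 * np.exp(-(x - 7) ** 2 / 0.2)

# Each noisy "source" sees the truth buried under random noise
# that is much bigger than the signal itself.
n_sources = 200
observations = truth + rng.normal(scale=2.0, size=(n_sources, x.size))

one_source = observations[0]         # looks like pure noise
stacked = observations.mean(axis=0)  # random noise cancels; the bumps survive

print("bump height in the truth:   ", round(truth.max(), 2))
print("noise level in one source:  ", round(one_source.std(), 2))
print("stacked value at the bump:  ", round(stacked[np.argmax(truth)], 2))
print("stacked noise away from it: ", round(stacked[x > 9].std(), 2))
```

With independent noise, stacking N observations shrinks the noise by roughly a factor of √N while the real bumps stay put, which is exactly what the sketches are showing.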

This is related to why Google is so good and your organization’s internal Search is so bad. Even though Google’s data source, the Internet, is way noisier than your organization’s intranet (I hope), Google is still better. This is true even if you have an in-house Google appliance. It’s because of Google’s second big data source (equally noisy): the billions of user clicks. Google doesn’t show you the sum of that data, but it does use that aggregation to decide what to show you. In essence, Google finds the peaks of agreement among billions of clicks and shows you just the peaks. Your IT department doesn’t have enough click-data to do that for your organization, no matter how good their software is.
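This is not Google’s actual ranking algorithm, of course, but as a toy sketch of the idea (with an invented intranet click log), “peaks of agreement” can be as simple as counting which result people clicked for a query and sorting by that count. The misclicks spread thinly across many results; genuine relevance piles up on a few.

```python
from collections import Counter

# Invented click log of (query, clicked_url) pairs. In real life this would be
# billions of rows, full of misclicks and junk, but the junk spreads thinly
# while genuine relevance piles up on the same few results.
click_log = [
    ("expense report", "intranet/forms/expense.pdf"),
    ("expense report", "intranet/forms/expense.pdf"),
    ("expense report", "intranet/old/expense_2009.doc"),
    ("expense report", "intranet/forms/expense.pdf"),
    ("expense report", "intranet/hr/holidays.html"),  # a misclick: pure noise
]

def rank_by_clicks(query, candidates, click_log):
    """Order candidate results by how often users clicked them for this query."""
    clicks = Counter(url for q, url in click_log if q == query)
    return sorted(candidates, key=lambda url: clicks[url], reverse=True)

candidates = [
    "intranet/old/expense_2009.doc",
    "intranet/forms/expense.pdf",
    "intranet/hr/holidays.html",
]
print(rank_by_clicks("expense report", candidates, click_log))
# ['intranet/forms/expense.pdf', 'intranet/old/expense_2009.doc', 'intranet/hr/holidays.html']
```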

Reliability Scores

You also probably don’t need to assign reliability scores to your data sources, even though it seems like a perfectly logical, even prudent, thing to do. The problem is that the scores will be fairly arbitrary, hard to agree on, and may present a false sense of rigor where there isn’t any. There are lots of ways a data source can be unreliable, but we’ve found different ways to handle them that avoid these problems. For example:

Problem: There’s hardly any signal (useful information) in the data.
Solution: You’re not going to get any peaks even when you sum it all together. Don’t score it, ditch it.

Problem: There are a lot of mistakes or inconsistencies in the data.
Solution: That’s the kind of noise that cancels out if you have enough data. If you do, then don’t sweat it.

Problem: The data has been deliberately redacted to remove what you’re looking for.
Solution: The more data there is, the harder it is to do this perfectly. If you get enough of it, you can find what was missed. Also, if you have enough of it, you’ll see mysterious quiet areas of data, because not only is the signal gone, but so is the noise. So you can detect the obfuscation, and you might even be able to catch the deceivers in a mistake.

Problem: The data is out-of-date.
Solution: This is absolutely relevant to the analysts, and it should be documented, but it’s not something you score. The analysts just need to know because data timeliness matters more for some questions than others.

Problem: There are gaps in the data coverage.
Solution: Again, it’s relevant, and should be documented, but it’s not a “reliability” issue. Maybe there weren’t enough 18-25 year-olds in the medical study you’re analyzing. Even so, if there’s a statistically significant result visible for 25-35 year-olds, you’ve still found something; you just don’t know if it works for young people.

Problem: The data’s useful, but its noise is obscuring the signal of other, cleaner data sets.
Solution: Let the users turn data sets on and off as they choose. See, this is actually what people have in mind with reliability scores – they imagine they’ll either weight more reliable data more heavily, or the user will use the score to decide what to look at. It’s true, you might end up doing some form of weighting. For example, a search engine might weight clicks higher for users who appear to come from a part of the world that speaks the language the results are in, so clicks from Italy have more effect on ranking Italian search results. But you don’t want to do this before you’ve had a chance to work with the data in the system. As for showing the reliability scores to the analysts…believe me, your analysts already have strong opinions on the reliability of individual data sources, and they will ignore your well-intentioned ratings. Just give them the ability to turn data sets on and off, and they’ll be happy and productive.
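As a rough sketch of that last point (the data set names and noise levels here are invented), the feature really is little more than a toggle: sum whatever the analyst has switched on, and treat weighting as an optional refinement you add later, once you’ve actually worked with the data in the system.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 500)
truth = np.exp(-(x - 3) ** 2 / 0.1)  # invented ground truth

# Hypothetical data sets with different noise levels. No reliability scores,
# just data the analyst can include or exclude.
datasets = {
    "field_reports":  truth + rng.normal(scale=1.0, size=x.size),
    "sensor_feed":    truth + rng.normal(scale=0.5, size=x.size),
    "legacy_archive": truth + rng.normal(scale=5.0, size=x.size),  # useful, but very noisy
}

def combined_view(datasets, enabled, weights=None):
    """Sum only the data sets the analyst has switched on, optionally weighted."""
    weights = weights or {}
    layers = [weights.get(name, 1.0) * datasets[name] for name in enabled]
    return np.sum(layers, axis=0)

# The analyst decides the legacy archive is drowning out the cleaner feeds
# and simply turns it off. Weighting can come later, if it's ever needed.
view = combined_view(datasets, enabled=["field_reports", "sensor_feed"])
```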

In short, signal reveals itself in noisy data if you have enough of it, and if you have tools that let you work with all of it in aggregate while still letting you quickly get at the details of the revealed signal.