March 16, 2024

Reinvented

In my 2017 blog entry, Reinvention, I looked back to recount my jump from industry back to academia. Here is a video from the CSAIL 60th anniversary celebration where I finish telling the story of that career reinvention.

If you watch it to the end, you can see the three big lessons about how to do research that I learned during my PhD - and how I learned those lessons.

Continue reading "Reinvented"
Posted by David at 05:27 PM | Comments (0)

October 28, 2023

Function Vectors in Large Language Models

In 1936, Alonzo Church made an amazing discovery: if a function can treat other functions as data, then it becomes so powerful that it can even express unsolvable problems.

We know that deep neural networks learn to represent many concepts as data. Do they also learn to treat functions as data?

In a new preprint, my student Eric Todd finds evidence that deep networks do contain function references. Inside large transformer language models (like GPT) trained on ordinary text, he discovers internal vectors that behave like functions. These Function Vectors (FVs) can be created from examples, invoked in different contexts, and even composed using vector algebra. But they are different from regular word-embedding vector arithmetic because they trigger complex calculations, rather than just making linear steps in representation space.
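To make the mechanism concrete, here is a rough sketch of the flavor of the technique, not Eric's exact method. The model choice, the layer, the prompts, and the simple averaging of hidden states are all illustrative assumptions; the paper identifies the responsible attention heads by measuring their causal effects.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained('gpt2-xl')
model = AutoModelForCausalLM.from_pretrained('gpt2-xl').eval()
LAYER = 20  # illustrative; the paper chooses layers/heads by causal mediation

def last_token_hidden(prompt):
    # Hidden state of the final token at LAYER.
    ids = tok(prompt, return_tensors='pt').input_ids
    with torch.no_grad():
        hs = model(ids, output_hidden_states=True).hidden_states
    return hs[LAYER][0, -1]

# Create a "function vector" from in-context examples of a task (antonyms).
icl_prompts = ['hot:cold, big:small, up:', 'wet:dry, fast:slow, tall:']
fv = torch.stack([last_token_hidden(p) for p in icl_prompts]).mean(0)

# Invoke it in a fresh zero-shot context by adding it to the hidden state.
def add_fv(module, inputs, output):
    output[0][:, -1] += fv  # GPT-2 blocks return a tuple; [0] is the hidden state
    return output

handle = model.transformer.h[LAYER].register_forward_hook(add_fv)
with torch.no_grad():
    logits = model(tok('good:', return_tensors='pt').input_ids).logits
handle.remove()
print(tok.decode(logits[0, -1].argmax()))  # with luck, the antonym: ' bad'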
It is a very cool finding. Help Eric spread the word!

Read and retweet the Twitter thread
Share the Facebook post
The project website: functions.baulab.info


Posted by David at 11:17 AM | Comments (0)

April 02, 2023

Is Artificial Intelligence Intelligent?

The idea that large language models could be capable of cognition is not obvious. Neural language modeling has been around since Jeff Elman’s 1990 structure-in-time work, but 33 years passed between that initial idea and first contact with ChatGPT.

What took so long? In this blog I write about why few saw it coming, why some remain skeptical even in the face of amazing GPT-4 behavior, why machine cognition may be emerging anyway, and what we should study next.

Read more at The Visible Net.


Posted by David at 03:08 PM | Comments (0)

March 28, 2023

Catching Up

Today, I received an email from my good college friend David Maymudes. David got his math degree from Harvard a few years ahead of me, and we overlapped at both Microsoft and Google; he is still at Google now. We have both witnessed and helped drive major cycles of platform innovation in the industry (David designed the video API for Windows and created the AVI format! And we both worked on Internet Explorer), so David is well aware of the important pieces of work that go into building a new technology ecosystem.

From inside Google today, he is a direct witness to the transformation of that company as the profound new approaches to artificial intelligence become a corporate priority. It is obvious that something major is afoot: a new ecosystem is being created. Although David does not directly work on large-scale machine learning, it touches his work, because it touches everybody.

Despite being an outsider to our field, David reached out to ask clarifying questions about some specific technical ideas, including RLHF, AI safety, and the new ChatGPT plug-in model. There is so much to catch up on. In response to David's questions, I wrote up a crash course in modern large language modeling, which we will delve into in a new blog I am creating.

Read more at The Visible Net.


Posted by David at 05:44 AM | Comments (0)

December 28, 2021

Running Statistics for Pytorch

Here is runningstats.py, a useful little module for computing efficient online GPU statistics in Pytorch.

Pytorch is great for working with small batches of data: if you want to do some calculations over 100 small images, all the features fit into a single GPU and the pytorch functions are perfect.

But what if your data doesn't fit in the GPU all at once? What if it doesn't even fit into CPU RAM? For example, how would you calculate the median values of a few thousand language features over all the tokens of Wikipedia? If the data is small, it's easy: just sort it all and take the middle. But if it doesn't fit, what do you do?

import datasets
from runningstats import Quantile, tally

ds = datasets.load_dataset('wikipedia', '20200501.en')['train']
q = Quantile()
for batch in tally(q, ds, batch_size=100, cache='quantile.npz'):
  feats = compute_features_from_batch(batch)  # your featurization here
  q.add(feats)  # dim 0 is the batch dim; dim 1 is the feature dim
print('median for each feature', q.quantile(0.5))

Here, online algorithms come to the rescue. These are economical algorithms that summarize an endless stream of data using only a small amount of memory. Online algorithms are particularly handy for digesting big data on a GPU where memory is precious. runningstats.py includes running Stat objects for Mean, Variance, Covariance, TopK, Quantile, Bincount, IoU, SecondMoment, CrossCovariance, CrossIoU, as well as an object to accumulate CombinedStats....
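The other Stat objects follow the same pattern as the Quantile example above. Here is a tiny sketch (the method names are my reading of the module, so double-check against runningstats.py):

import torch
from runningstats import Variance

v = Variance()
for _ in range(1000):               # batches stream through; never all in memory
    batch = torch.randn(100, 512)   # dim 0 is the batch dim; dim 1 is features
    v.add(batch)
print(v.mean().shape, v.variance().shape)  # per-feature running mean and variance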

Continue reading "Running Statistics for Pytorch"
Posted by David at 02:23 PM | Comments (0)

November 26, 2021

Reddit AMA

Join me at this link on Reddit on Tuesday 3pm ET / 12 PT to #AMA about interpreting deep nets, AI research in academia vs industry, and life as a PhD student. I am a new CS Prof at Northeastern @KhouryCollege; postdoc at Harvard; recent MIT PhD; Google, Msft, startups...

It is graduate school application season! So with prospective PhD students in mind, I am hosting an AMA to talk about life as a PhD student in computer vision and machine learning, and the choice between academia and industry. My research studies the structure of the computations learned within deep neural networks, so I would especially love to talk about why it is so important to crack open deep networks and understand what they are doing inside.

Before I start as a professor at Northeastern University Khoury College of Computer Sciences next year, I am doing a postdoc at Harvard, and you can see my recent PhD defense at MIT here. I have a background in industry (Google, Microsoft, startups) from before I did my own “great resignation” to return to school as an academic, so ask me anything about basic versus applied work, or research versus engineering. Or ask me about “grandmother neurons,” making art with deep networks, ethical conundrums in AI, or what it's like to come back to academia after working.


Posted by David at 11:21 AM | Comments (1)

August 25, 2021

Assistant Professor at NEU Khoury

I am thrilled to announce that I will be joining the Northeastern University Khoury College of Computer Sciences as an Assistant Professor in Fall 2022.

For prospective students who are thinking of a PhD, now is a perfect time to be thinking about the application process for 2022. Drop me a note if you have a specific interest in what our lab does. And if you know somebody who would be a fit, please share this!

http://davidbau.com/research/
https://www.khoury.northeastern.edu/apply/

We think that understanding the rich internal structure of deep networks is a grand and fundamental research question with many practical implications. (For a talk about this, check out my PhD defense). If this area fascinates you, consider applying! The NEU Khoury school is in downtown Boston, an exciting, international city, and the best place in the world to be a student.


Posted by David at 06:48 PM | Comments (1)

August 24, 2021

PhD Defense

Today I did my PhD defense, and my talk is posted on YouTube. Here is the talk!

Title: Dissection of Deep Networks

Do deep networks contain concepts?

One of the great challenges of neural networks is to understand how they work. Because a deep network is trained by an optimizer, we cannot ask a programmer to explain the reasons for the specific computations that it happens to do. So we have traditionally focused on testing a network's external behavior, ignorant of insights or flaws that may hide within the black box.

But what if we could ask the network itself what it is thinking? Inspired by classical neuroscience research on biological brains, I introduce methods to directly probe the internal structure of a deep convolutional neural network by testing the activity of individual neurons and their interactions.

Beginning with the simple proposal that an individual neuron might represent one internal concept, we investigate the possibility of reading human-understandable concepts within a deep network in a concrete, quantitative way: Which neurons? Which concepts? What role do concept neurons play? And can we see rules governing relationships between concepts?

Following this inquiry within state-of-the-art models in computer vision leads us to insights about the computational structure of those deep networks that enable several new applications, including "GAN Paint" semantic manipulation of objects in an image; understanding of the sparse logic of a classifier; and quick, selective editing of generalizable rules within a fully trained StyleGAN network.

In the talk, we challenge the notion that the internal calculations of a neural network must be hopelessly opaque. Instead, we strive to tear back the curtain and chart a path through the detailed structure of a deep network by which we can begin to understand its logic.


Posted by David at 03:13 PM | Comments (0)

August 06, 2021

Global Catastrophizing

Do you think the world is much darker than it used to be? If so, you are not alone.

I have always assumed that a feeling of psychological decline is just a side-effect of getting older. But a paper by Bollen et al., out in PNAS today, suggests that a darker outlook in recent years might not be specific to any of us individually. By analyzing trends in published text in the Google ngrams corpus, researchers from Indiana University and Wageningen have discovered that, across English, Spanish, and German, published text shows sudden changes in language use over time that are indicative of cognitive distortions and depression, coinciding with major wars or times of social upheaval.

The chart above is from Bollen's paper, and it counts something very simple: for every year, how many times a particular set of short phrases appears in the printed books published that year. The annual counts come from Google's Books ngram corpus, derived from scans of published books, and the 241 phrases counted are word sequences chosen by a panel of cognitive behavioral therapy specialists as markers for cognitive distortion schemata (CDS). That is, they are phrases that suggest the systematic errors in thinking that are correlated with mental health issues treated by psychologists.

For example, one of the 241 counted phrases is "you always," because those words often indicate overgeneralization, as in the sentences "You always say no" or "You always win." The bigram "everyone knows" indicates mind-reading, because it reveals the speaker's belief that they know what other people are thinking. The trigram "will never end" indicates catastrophizing, an exaggerated view of negative events. In total, the panel of experts cataloged each of the 241 selected phrases as a marker for one of a dozen specific cognitive distortions. These cognitive distortions are correlated with depression, so it is interesting to ask whether large-scale trends in the usage of these phrases reveal mass changes in psychology over time.

The chart suggests that it might. It seems to reveal suffering in Germany coinciding with World Wars I and II, and trauma in the English-speaking world at the Spanish-American War, Vietnam war protests, and 9/11. Strikingly and worryingly, all the languages show a dramatic increase in cognitively distorted use of language since 2007. If you believe the linguistic tea leaves, our collective state of mind seems to have taken an extraordinary turn for the worse in the last decade — globally.

The paper is an example of what can be done with the Google Books ngrams corpus, a pretty great resource that catalogs language use by counting words and ngrams, by year, in about 4% of all published text. That means you can easily look further into the data yourself: the authors provide their list of phrases, so you can examine the trends by individual category and phrase. Here are the top phrases for catastrophizing in English:



Explore and modify the query yourself here. You can see spikes in certain phrases corresponding to WWI and WWII, and the upwelling, in recent decades, of expressions of the idea that a variety of things "will end" and "will not happen".

It is when the 241 phrases are added together that we see dramatic recent spikes reminiscent of the climate-change hockey stick plot by Mann, Bradley, and Hughes.
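If you want to reproduce that summed index yourself, the arithmetic is simple once you have per-year counts; here is a hypothetical sketch (the file names and columns are placeholders, not the authors' data):

import pandas as pd

counts = pd.read_csv('cds_phrase_counts.csv')  # columns: year, phrase, count
totals = pd.read_csv('yearly_totals.csv')      # columns: year, total_ngrams

# Sum the 241 marker phrases per year, normalized by total ngrams so the
# index measures prevalence rather than publishing volume.
index = (counts.groupby('year')['count'].sum()
         / totals.set_index('year')['total_ngrams'])
print(index.loc[2000:2019])  # the post-2007 rise should appear here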

Do you agree with the authors that these changes in word usage are meaningful? Have we been experiencing a catastrophic worldwide decline in psychological health since 2007?

Or is this just an example in which the authors themselves are catastrophizing, looking at data in a way that interprets events in the world as much worse than they actually are?

Previous musings on society-wide catastrophizing here.


Posted by David at 08:10 AM | Comments (0)

March 18, 2021

Passwords should be illegal

As part of modernizing U.S. infrastructure, America should eliminate passwords.

Our use of passwords to build security on the internet is akin to using flammable materials to build houses in densely-populated cities.  Every single website that collects, stores, and transmits passwords invites a new cybersecurity catastrophe.

When half of Chicago burned down in 1871, citizens reflexively blamed the disaster on evil actors: arsonists, immigrants, communists.  After the fire, the first response of political leaders was to impose martial law on the city to stop such evil-doers.  From our modern perch, it seems obvious that the blame and the fix were misplaced.  Even if the spark were lit by somebody with bad intentions, the scale of the disaster was caused by outdated infrastructure.  Chicago had been built out of combustible materials that were not safe in a densely-built city.

Our continued use of passwords on the internet today poses the same risk.

Just as a small fire in a flammable city can turn into a massive disaster, on the internet, a single compromised password can lead to a chain reaction of compromised secrets that can open vast parts of the internet to hacking.  The fundamental problem is that we store and transmit many of the secrets that we use to secure the internet, including passwords, on the internet itself.

In the 2020s, using, transmitting, and storing passwords on the internet should be as illegal as constructing a Chicago shanty out of incendiary cardboard.

Physical key-based authentication systems are cheap.  They keep secrets secure on computer chips that are not connected to the internet and that never reveal their secrets on the network.  If physical keys were used everywhere we currently use passwords, all internet hacking would be far harder and slower.
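To see why such keys are safer, consider the basic challenge-response idea, sketched below. This is a toy illustration, not the FIDO2/WebAuthn protocol itself: the server stores only a public key, and nothing reusable ever crosses the network.

import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Enrollment: the private key is generated inside the hardware token
# and never leaves it; the server stores only the public key.
device_key = Ed25519PrivateKey.generate()
server_pubkey = device_key.public_key()

# Login: the server sends a fresh random challenge; the token signs it.
challenge = os.urandom(32)
signature = device_key.sign(challenge)

# The server verifies the signature. A network eavesdropper learns nothing
# that can be replayed, and a server breach leaks no reusable secret.
try:
    server_pubkey.verify(signature, challenge)
    print('login ok')
except InvalidSignature:
    print('login rejected')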

Key-based login systems have been available for decades, but because standards are not mandated, they are adopted almost nowhere.  Physical keys are slightly more inconvenient for system-builders, and consumers do not demand them because the dangers of hacking are invisible.  It is an excellent example of a situation where change is needed, but the marketplace will not create the change on its own.

That is why our country's best response to the increasing wave of hacking disasters should be led by people like the folks at NIST, rather than the U.S. Army.  We should standardize, incentivize, mandate, and fund the use of non-password authentication in all computer systems over the next few years, with a common set of standards so that people can log into all systems using trustworthy physical keys that cannot be hacked remotely.

Eliminating passwords would make more of a difference to cybersecurity than any clever retaliation scheme that the cybersecurity soldiers might cook up.  Although there are certainly evil actors on the internet, we ourselves are the ones who empower hackers by perpetuating our own dangerous practices.

As we modernize U.S. infrastructure, we should prioritize modernizing standards and requirements around safe authentication systems on the internet.



Posted by David at 12:28 PM | Comments (2)

October 16, 2020

Deception is a Bug

Today Twitter and Facebook decided to manually limit the spread of the NY Post's unverified story about a hack on the Biden family. Taking responsibility for some of the broad impacts of their systems is an excellent move.

But the fact that FB+Twitter needed to intervene is a symptom of badly flawed systems. We all know that the systems would have otherwise amplified the misinformation and caused widespread confusion. In other words, we all know our big social networks have a bug. It is a fundamental bug with ethical implications - but in the end, it is a bug, and as engineers we need to learn to fix this kind of issue. As a field, we need to be willing to figure out how to design our systems to be ethical. To be good.

What does it mean for an AI to be good?

The fundamental reason Twitter and Facebook and Google are having such problems is that the objectives used to train these systems are wrong. We can easily count clicks, minutes of engagement, re-shares, transactions. So we maximize those. But we know that these are not actually the right goals.

The right goal? In the end, a system serves users, and so its purpose is to expand human agency. A good AI must help human users make better decisions.

Yet improving decisions is quite a bit harder than maximizing page views. It requires getting into subtle issues, developing an understanding of what it means to be helpful, informative, honest. And it means being willing to take on tricky choices that have traditionally been the realm of editors and policymakers. But it is possible. And, as a field, it is what we should be aiming for.

A few more thoughts in previous posts:
The Purpose of AI
Volo Ergo Sum


Posted by David at 12:42 PM | Comments (1)

August 19, 2020

Rewriting a Deep Generative Model

Can the rules in a deep network be directly rewritten?

State-of-the-art deep nets are trained as black boxes, using huge piles of data and millions of rounds of optimization that can take weeks to complete. But what if we want to change a learned program after all that training is done?

Since the start of my PhD I have been searching for a different way to reprogram huge deep nets: I want to reconfigure learned programs by plugging pieces together instead of retraining them on big data sets. In my quest, I have found a lot of cool deep net structure and developed methods to exploit that structure. Yet most ideas do not work. Generalizable editing is elusive. Research is hard.

But now, I am delighted to share this finding:

Rewriting a Deep Generative Model (ECCV 2020 oral)
Website: https://rewriting.csail.mit.edu
Code: https://github.com/davidbau/rewriting
Preprint: https://arxiv.org/pdf/2007.15646.pdf
2 min video: https://www.youtube.com/watch?v=i2_-zNqtEPk


Editing a StyleGANv2 model of horses. After changing a rule, horses wear hats!

Unlike my GanPaint work, the focus of this paper is not on changing a single image, but on editing the rules of the network itself. To stretch a programming analogy: in previous work we figured out how to edit fields of a single neural database record. Now we aim to edit the logic of the neural program itself.

Here is how: to locate and change a single rule in a generative model, we treat a layer as an optimal linear associative memory that stores key-value pairs that associate meaningful conditions with visible consequences. We change a rule by rewriting a single entry of this memory. And we enable users to make these changes by providing an interactive tool.
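Distilled to linear algebra, the update looks something like the sketch below: a minimal rank-one change to a weight matrix W that forces a chosen key to map to a new value while disturbing other (covariance-typical) keys as little as possible. The dimensions and the key covariance here are stand-ins, not the real model's.

import torch

d_k, d_v = 512, 512
W = torch.randn(d_v, d_k)   # a layer weight, viewed as an associative memory
C = torch.eye(d_k)          # covariance of keys seen in training (stand-in)

k_star = torch.randn(d_k)   # the condition to re-map (e.g., "horse head")
v_star = torch.randn(d_v)   # the new consequence (e.g., "wears a hat")

# Minimal-change rank-one update enforcing W_new @ k_star == v_star.
u = torch.linalg.solve(C, k_star)
u = u / (k_star @ u)
W_new = W + torch.outer(v_star - W @ k_star, u)

assert torch.allclose(W_new @ k_star, v_star, atol=1e-3)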

I edit StyleGANv2 and Progressive GAN and show how to do things like redefine horses to let them wear hats, or redefine pointy towers so that buildings sprout trees. My favorite effects are redefining kids' eyebrows to be very bushy, and redefining the rule that governs reflected light in a scene, to invert the presence of reflections from windows.

Here is a 10 min video:
https://rewriting.csail.mit.edu/video/

MIT news has a story about it here today:
http://news.mit.edu/2020/rewriting-rules-machine-generated-art-0818


Posted by David at 11:11 AM | Comments (2)

July 05, 2020

David's Tips on How to Read Pytorch

Pytorch has a great design: easy and powerful. Easy enough that it is definitely possible to use pytorch without understanding what it is doing or why. But it also gets better the more you understand.

As part of summer school at MIT, next week I'm doing a lecture to introduce students to pytorch. I have written a few code examples that I hope will give students a head start on understanding the design of pytorch. Each concept is illustrated visually with a cute minimal hackable example. All the examples are notebooks that are hosted on Google Colab.

It covers tensor indexing conventions, benchmarks GPU versus CPU speeds, explains autograd with simple systems, and plots what optimizers are doing on 2d problems. Then I put the pieces together with a detailed discussion of network modules and data loaders, training toy networks where the whole space can be visualized, as well as a simple but realistic five-minute ResNet training example.
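In the same spirit as those notebooks, here is a sketch of the kind of 2d toy problem they use (this is my own minimal example, not the lecture code):

import torch

x = torch.tensor([2.0, -3.0], requires_grad=True)
opt = torch.optim.SGD([x], lr=0.05, momentum=0.9)

for step in range(100):
    loss = x[0] ** 2 + 10 * x[1] ** 2  # an ill-conditioned 2d bowl
    opt.zero_grad()
    loss.backward()   # autograd fills x.grad
    opt.step()        # watch momentum navigate the narrow valley
print(x)  # approaches the minimum at (0, 0)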

Here are David's Tips on How to Read Pytorch.


Posted by David at 02:58 PM | Comments (3)

April 25, 2020

A COVID Battle Map

Whenever Heidi gets a headache after coming back from the hospital, I worry about losing her to COVID.

But I am very aware that, with the virus already so widespread, the decisive battle is no longer being fought by doctors in the hospitals. They are just buying time, containing the threat just like you and I do when we social distance.

The outcome will depend on a race between two global teams furiously trying to hack a dozen proteins. The good guys are thousands of biologists, an historic worldwide collaboration. The bad guys are the random forces of natural selection, the mutations that happen inside each carrier. Thanks to the Bedford lab at Fred Hutchinson, you can see a map of the battlefield here, tracing the random moves made by the bad guys: (data from GISAID)

What are the bad-guy mutations doing? A small study came out of Zhejiang university this week (medrxiv, not peer-reviewed) that hints at the risks as we let the virus propagate and evolve. They did cell-culture studies on 11 samples and found, for example, a 19-fold difference in cell-culture virulence between one version similar to the virus in WA, CA, OR, and VA (not very virulent) and one resembling strains in England and France (far more virulent). One of the versions from Wuhan was 249 times worse. (Strains common in NY or Italy were not included.)

So as we celebrate that WA state seems to be beating the virus, this study highlights that WA has beaten just one strain. The European strains spreading elsewhere are different and might actually be more deadly. I think it is important to contain COVID before an even worse strain spreads, as happened in 1918.

Happily, in 2020, we can map out a set of weak points that the good guys can counterattack. Here is a survey paper. Some notable targets:

  1. Attacks on the ACE2 receptor, the molecular passkey used by the virus to break into human cells.
  2. Attacks on the viral replication machine, the intricate RNA-dependent RNA polymerase RdRp/NSP12.
  3. Attacks on a key link in the viral factory, the protease 3CLpro/NSP5 that cleaves out the viral proteins after they are made in one big chain.
  4. Old-fashioned attacks on the virus armor. Vaccines target the S protein shell on the outside of the virus.
  5. New-fangled behind-enemy-lines attacks by CRISPR hotshots who want to directly chop up viral RNA.
  6. Some scientists are working on defenses that improve the human body's response, steeling our organs against viral attack by trying to calm the inflammation that causes such problems.
  7. Others are working on defenses that transplant a more robust immune response, via donated plasma.

The New York Times has beautiful renderings of all the molecular attack targets.

Unlike in a shooting war, we do not have news reporters going to the battlefield to report on the day's wins and losses. But maybe we should. None of these is a sure thing. But they all have a chance, and there are real salvos being launched on each of these targets.

On both sides, the battlefield is active.

Also posted on Facebook here.


Posted by David at 06:45 PM | Comments (1)

March 25, 2020

COVID-19 Chart API

Here is the COVID-19 Live Chart API. Use it to create a custom live chart of COVID-19 stats on a linear or logarithmic scale, comparing the set of countries and states that you choose (or an automatically sorted set of worst states or countries), on the timeframe that you want to see.

New 3/27/20: You can now plot local data for most US counties. Just type the counties, states, and countries you want to see into the search box, and you can make a custom graph focused on the localities you care about.

It is designed to help you use current data to anticipate the future. Click on "advanced options" on covid19chart.org. It just takes a few clicks to make a new visualization.

Once you have created a custom chart, you can email it or print it for your local policymaker. Or better yet, if you are making a dashboard that leaders will see every day, theme the graph dark or light to match your webpage, and use the "bare" mode to embed it as an iframe, like this:

<iframe src="https://covid19chart.org/#/?start=%3E%3D50&include=WA%3BMA%3BNY%3BItaly&top=0&domain=Intl&theme=dark&bare=1" width="500" height="388"></iframe>


(The embedded chart is interactive.)

The data is live, pulled directly from the Johns Hopkins CSSE COVID data feed on github. Although that feed is in flux and changes format every few days, I will track their changes and keep the chart up-to-date as needed. Please email me (david.bau@gmail.com) if you have any problems with this API.

The current data tell a simple story.

In the US, if we want to avoid a grim future, we need to be making better decisions now. Every state of our country is seeing a similar exponential explosion, just starting on different days. Please use these charts to tell this story. And thank you for helping our leaders understand the importance of our choices today.

Continue reading "COVID-19 Chart API"
Posted by David at 10:46 AM | Comments (5)

March 24, 2020

The Beginning

Today marks the beginning of the COVID-19 crisis for me. It is the first day that surgeons are being called in from their regular duties to take care of the wave of COVID-19 patients at MGH. Heidi needs to run into the hospital. We will have weeks or months of this ahead.

I am terrified.

The COVID-19 chart has been updated to include both state-level and international statistics, and it includes an API so that you can make, link, and embed a custom chart that focuses on the states or countries of your choice. The (no doubt stressed-out) CSSE team has been screwing up the data feed, but I will keep the data cleaned and correct on the live chart as long as it can be patched together. Below we can see America first in the chart today.

Please use it as a tool to pressure your local policymakers to take this crisis seriously.

Thank you.


Posted by David at 10:40 AM | Comments (1)

March 23, 2020

No Testing is not Cause for Optimism

Two readings and a thought related to covid-19 testing.

Lack of information requires us to believe two contradictory things at once. From a policy point of view, we need to understand that very few people are infected yet. And from a personal behavior point of view, we need to understand that many people are already infected.

Policy first. Some people think that the lack of testing means that there could be far more asymptomatic cases than we know, and that therefore the disease could be far less deadly than we imagine.

But consider the case of the town of Vò, near the epicenter of the Italian outbreak, where all 3,000 residents were tested. As severe as the outbreak is in Italy, it corresponded to less than 3% of the population being infected. So as bad as the Italian situation is, at least in the one town where everyone was tested, it could still get 30 times worse. Blindness is not cause for optimism.

Which individuals should be tested? The right behavior is to do the things that maximize lives saved. That means testing should be done in situations where it would change care, for example on healthcare providers who do not have the option to isolate, so that they do not inadvertently spread the virus to other providers and patients.

But of course that means many infected people will be untested, so everybody needs to operate under the assumption that we are all infected.

Paradoxically, this lack of information means we need to keep in mind two different realities at once. First, we need to recognize that almost nobody has it yet, so the society-wide damage can and will get far, far worse; and second, that we and others are likely to have it, so our personal risk and responsibility is very high. We need to isolate.

The parable of two realities corresponds to the logarithmic and linear views of the disaster. I have posted an updated version of the covid-19 time series tracker, which provides both views on covid19chart.org.


Posted by David at 01:45 PM | Comments (0)

March 22, 2020

Two Views of the COVID-19 Crisis

I have posted an interactive chart of USA COVID-19 cases.

This chart lets you see coronavirus data from two different points of view: the policymaker's view, and the doctor's view.

For policymakers, the chart lets you see USA data in the same way the Financial Times COVID-19 plot by John Burn-Murdoch compares policies internationally. Select the logarithmic totals with a '>=100' starting threshold, so that "day zero" is the first day there are 100 cases in a state. Over time, if different states' policies have different effects on the growth of the virus, the exponents, and therefore the slopes, will reveal the differences.
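The alignment trick behind that view is easy to reproduce; here is a sketch in pandas with hypothetical file and column names:

import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv('us_cases.csv')  # columns: date, state, cases (cumulative)
for state, g in df.groupby('state'):
    g = g.sort_values('date')
    g = g[g.cases >= 100].reset_index(drop=True)  # day zero: first day >= 100
    plt.plot(g.index, g.cases, label=state)
plt.yscale('log')  # a constant growth rate appears as a straight line
plt.xlabel('days since 100th case')
plt.ylabel('cumulative cases')
plt.legend()
plt.show()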

The other point of view is the doctor's-eye view. Doctors must deal with the patients who walk into the ER and who lie sick in ICU beds. To anticipate these numbers, switch to the 'delta linear' view in the current month. The spikes show why the panic is justified, and why minor policy changes have massive ramifications.

The takeaway? The chart re-emphasizes the point that this is not a game. There is a huge gap between the "policymaker's" view and the "doctor's" reality on the ground. Slight changes from a policymaker's point of view have massive ramifications for doctors.

After our leaders negotiate about a "gradual" shutdown of car factories, Michigan illnesses explode. After beaches stay open for one last lucrative spring break party, Florida cases skyrocket. And what begins as a local outrage will become a healthcare shortage, then a nationwide menace. A single idiotic master of the universe could trigger an outbreak that will use up the ventilators that would have saved your grandfather.

In our 50 states we are all linked. Despite dramatically different local policies, it is likely that our rate of infection growth will be largely the same across the country. In coming days, this chart will tell the story of our national interconnectedness.

Please. We need to take the crisis more seriously than we are. Our corporate, city, state, and federal leaders are not doing enough. "Social distancing" of the coastal elite needs to give way to a much more universal regime of physical isolation, enforced shutdowns, shifting of priorities, deferral of debts, and testing, testing, testing, nationwide.

The graph automatically updates every day based on current data. Please share. And please isolate.

I made the chart to help Heidi (who is a surgeon at MGH) see summaries of some USA statistics that are not being plotted in the media. The code is open-source at github.com/davidbau/covid-19-chart. It is just a bit of HTML and JS, and should be easy to extend to show more information. Pull-requests are welcome.


Posted by David at 01:57 PM | Comments (3)
