When seeking out information about vaccine management systems, one popular place to look is news stories about system failures. These stories can be highly visible, and rightly raise concern about problems that various jurisdictions have encountered in using particular systems.
When doing so, however, it can be helpful to keep the following questions in mind:
Context: What information is typically missing from these reports, and what questions should you ask to find it out?
Defining success: How do you evaluate the "success" of a given system under consideration?
Clarifying difficulties: How do you characterize difficulties that a given system has been experiencing?
Often, news stories are missing critical context. Seek out, where possible, more specific facts in order to better understand what has happened and what can be done about it. In particular, ask:
What system is being used? If the story does not specify, consult with us to find more information about which jurisdictions are using which systems.
Once you know which system is in use, check what other jurisdictions are using the same system, and whether they have encountered similar issues.
Keep in mind that different systems may be in use for different aspects of the scheduling process - check tool categories and vendor categories for more information.
What specific problems are people encountering?
Beware of catch-all terms like "bugs". Instead, check the common challenges section to see if this challenge is unique to the system under discussion. Keep in mind that systems vary in their ability to address these challenges - please reach out to ask for detailed vendor evaluations.
Exactly how many people are impacted, and what populations do they represent?
A small number of very passionate people can make a loud splash, but even a small number of impacted people warrants attention. Especially if they represent underrepresented groups or those most in need of the vaccine, their concerns should be taken very seriously.
If related to downtime, ask a few questions:
How long was the system down?
Take particular note of outages lasting more than 10 minutes at a time
Outages longer than this are unacceptable, and avoidable with modern best practices in Site Reliability Engineering
Have there been multiple outages?
These should be infrequent, and decrease in frequency over time if they happen at all.
Was the root cause identified and rectified? Is there mention of a post mortem?
This kind of open discussion of system outages is expected in a modern software development environment.
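The downtime questions above can be made concrete with a little arithmetic. The sketch below uses an entirely hypothetical outage log (the function names, figures, and the 90-day window are illustrative assumptions, not data from any real system) to compute overall availability and flag outages over the 10-minute threshold mentioned above:

```python
from datetime import datetime

# Hypothetical outage log: (start time, duration in minutes) pairs.
# In practice this would come from a vendor's status page or post mortems.
outages = [
    (datetime(2021, 3, 1, 9, 0), 25),
    (datetime(2021, 3, 15, 12, 30), 8),
    (datetime(2021, 4, 2, 14, 0), 4),
]

def availability(outages, period_days=90):
    """Percent of time the system was up over the reporting period."""
    total_minutes = period_days * 24 * 60
    down = sum(minutes for _, minutes in outages)
    return 100.0 * (total_minutes - down) / total_minutes

def long_outages(outages, threshold_minutes=10):
    """Outages exceeding the threshold discussed above."""
    return [o for o in outages if o[1] > threshold_minutes]

print(f"Availability: {availability(outages):.3f}%")     # 99.971%
print(f"Outages over 10 min: {len(long_outages(outages))}")  # 1
```

Even a system that is "up 99.97% of the time" can have had a 25-minute outage at the worst possible moment, which is why outage duration and frequency matter more than a single headline percentage.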
How did the vendor respond to these issues?
Systems should be robust to high-traffic situations. If they are not, the system is at fault, not the audience using it.
Be careful if the vendor pinpoints "user error" as a cause - these systems should work with a variety of audiences with a variety of backgrounds, and where necessary, should provide training in how to use them.
When a system is praised, particularly in news media, it is important to seek clarity on the following questions, to better understand the nature of this success:
Has the system been stress tested under real, significant load?
Sometimes systems can declare victory too early, before real users have attempted to use them. See if you can find information on the exact number of providers and end users currently using the system.
How many providers have signed up with the tool?
Difficulties in integrating with individual EHRs and IISs become obvious at scale - pay close attention to how many jurisdictions, systems, and sites within jurisdictions a given tool has integrated with
Who is writing the article or providing advice?
Be aware of the source of a given article; it may have come from a PR firm or another entity with an interest in touting success and hiding other indicators.
What brand names have been attached to the effort, and what is the true nature of their involvement?
A brand name in one field does not translate to a brand name in another
This is a unique area with unique challenges - be aware of the nature of the company offering the solution, their history, and background
What are some tangible, numeric measures?
There are a number of factors at play in how "successful" a given jurisdiction has been at administering vaccines. Try to keep an eye on tangible, numeric measures where possible, such as: vaccination rate / 100K and percentage of vaccines administered vs. delivered.
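The two measures named above are simple ratios, and computing them yourself from published dose counts is a useful check on headline claims. A minimal sketch, using made-up jurisdiction figures purely for illustration:

```python
def rate_per_100k(vaccinations, population):
    """Vaccinations administered per 100,000 residents."""
    return 100_000 * vaccinations / population

def administered_vs_delivered(administered, delivered):
    """Percent of delivered doses actually administered."""
    return 100.0 * administered / delivered

# Hypothetical figures for a jurisdiction of 1.2M residents.
print(rate_per_100k(450_000, 1_200_000))              # 37500.0 per 100K
print(administered_vs_delivered(450_000, 520_000))    # ~86.5%
```

A jurisdiction with a high rate per 100K but a low administered-vs-delivered percentage may simply be receiving more supply than it can schedule, which is a different problem than a scheduling system failure.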
What is correlated vs. caused?
Be aware that there are a number of factors that can make the determination of a "successful" system more complicated. Consider the below when establishing correlation vs. causation of certain numeric indicators:
Percentage of residents receiving vaccination through federally-sponsored sites (mass vaccination sites) and systems (VA, IHS)
Percentage of rural vs. urban residents
Political divisions concerning vaccine hesitancy
Other demographic trends and numbers.
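One way to account for the first confounder above is to separate doses administered through federally-sponsored channels before comparing jurisdictions. The sketch below uses hypothetical numbers (the function name and all figures are assumptions for illustration) to show how a naive comparison can invert once federal volume is removed:

```python
def state_system_rate_per_100k(total_doses, federal_doses, population):
    """Doses administered through the state's own system, per 100K residents,
    excluding federally-sponsored channels (mass sites, VA, IHS)."""
    return 100_000 * (total_doses - federal_doses) / population

# Jurisdiction A looks better on total doses, but much of its
# volume flowed through federal channels, not its own system.
a = state_system_rate_per_100k(600_000, 250_000, 1_000_000)  # 35000.0
b = state_system_rate_per_100k(500_000, 50_000, 1_000_000)   # 45000.0
print(a, b)
```

Here jurisdiction B's own system is actually administering more doses per capita, even though A's headline total is higher - exactly the kind of correlation-versus-causation trap described above.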
When you see bad news, consider the below caveats in assessing these difficulties:
Neither the number of jurisdictions using a system, nor the number that have switched off of it, is necessarily indicative of the failures of that system
News travels quickly, as do rumors. It can be difficult to conduct a rigorous root cause analysis on a particular problem, particularly when there is time pressure
There is incentive to switch away from a given system as a means of making a statement. But there is often no guarantee that the system you switch to will not also have the same issues. Focus instead on the specific issues that led to the system switch, and the specific ways the newer systems have addressed those issues.
A new system is not a cure-all
It bears repeating: many problems will not be fixed by moving to a new scheduling system. Work with the vendor to see what can be done within the confines of your current system, and use the points above to articulate the minimum required feature set of a solution. If the vendor is unable to offer these, it might make sense to look elsewhere.
The ability of a system to improve is often more important than its initial mistakes
Mistakes happen. But look for the trajectory of a given solution - does it have fewer issues over time? Is it adding features and fixing bugs to address user complaints, and are those changes actually making a tangible difference?