Modernization and Improvement
2018 Sentinel Initiative Annual Public Workshop

(people chattering) – Good morning, everyone. I’m Mark McClellan, I’m the
director for the Duke-Margolis Center for Health Policy,
and I’d like to welcome all of you today in
person and on our webcast to this annual Sentinel
Initiative Public Workshop, which is being hosted by the Duke-Margolis Center for Health Policy and supported by a cooperative agreement with the FDA. This is a special meeting. This year marks the 10th anniversary of the Sentinel Initiative
Annual Public Workshop. Along with that, the 10th anniversary of work at FDA in collaboration
with a broad range of private sector organizations, entities, researchers, industry, and others to implement a post-market
surveillance system at the FDA. Some of you here today have
attended previous workshops, which have provided a
venue for communicating what’s going on in Sentinel
and what the opportunities for further developments are. And also, ideas for bringing
new approaches into Sentinel. Some of you go back further than that. Looking around the room, I see many people who were involved in initially standing up this collaborative system for
developing real-world evidence related to safety and an increasing array of other types of evidence
on medical products. Some of you were here at the beginning, back when the legislation
enabling the Sentinel System was passed with strong bipartisan support, led by Senator Kennedy and
Senator Enzi back in 2007. And some of you were
there even before that. Some of the ideas, from before there were terms like real-world evidence, made their way into the vision for a national system like this. And I’m sure there are many people here who are going to see this
program into a future that has an even bigger
impact through developing real-world data and real-world evidence to impact the effective
use of medical products. So there has been a lot accomplished, but we know that there
is a lot of potential to further advance Sentinel into a robust 21st century system that
generates timely and useful real-world evidence on a
range of evidentiary needs. There also are some challenges to address to achieve this aim: with the development of new data sources, new ideas, more experience, more methodologic expertise, and more potential for collaboration come continued growing pains, even for an approach that is now a decade old. Addressing these challenges is going to require continued efforts across stakeholders, like what we are trying to do today with the active participation of payers, health systems, industry,
academic partners, and so on, to refine the Sentinel
Initiative’s capabilities. In fact, today, you even
got a glossary of terms, along with your meeting materials, recognizing just how
broad Sentinel has become, with terms like AV and BEST. For those of you who
aren’t familiar with those, you will be by the end of
the day, as you hear more about how the Sentinel
program is extending. And this includes
considering the emergence of innovative tools like
blockchain, cloud computing, artificial intelligence
tools that hold considerable potential for expanding
the impact of this program, but again, have not yet been
fully developed and applied. Today we want to discuss
and explore how innovative ideas like these can
benefit from Sentinel’s existing capabilities,
which you’ll hear about and hear illustrated, in terms of the many recent achievements of the program. And we’re also gonna talk
about a long-term path towards modernizing
and improving the suite of tools that not only
leverages new technologies, but continues to build on
the underlying partnerships and collaborations that have
made Sentinel’s model possible, and that have made it so successful. We’re looking forward
to a robust discussion on all of these issues today, as we focus on Sentinel’s continued development. Today, we’re very honored to host Commissioner Scott Gottlieb, who will give a keynote address to get us going. First, I wanna quickly
go through the agenda for today and a few housekeeping items. After the commissioner’s keynote address, the Sentinel leadership
from across FDA centers, the Drug Center, the Biologic
Center, the Device Center, will highlight achievements
and discuss their respective centers’ priorities for the coming year. We’ll then move into a
session that will feature a behind-the-scenes look at
the Sentinel System operations, from key figures who are guiding its use, including the principal investigator of the Coordinating Center,
Rich Platt, and the leadership from a few of Sentinel’s
diverse data partners. After that, we’ll take a short break. And then coming back, we’ll hear from CDER and CBER Sentinel leads
on some key achievements and plans for the further
development of Sentinel’s tools. We’ll then have panelist
reactions from representatives of the FDA’s Office of New
Drugs and private industry on how they’re leveraging Sentinel’s tools and results in their work. There’s a break for lunch. After lunch, we’ll reconvene to discuss ongoing efforts expanding
access to and use of the Sentinel System in
increasingly diverse ways, beyond the direct use by FDA
for some of these important safety issues and
effectiveness issues, now, too. We’ll then take a quick
break before our last session of the day, which will
be a full panel with what I think will be a very
lively discussion on what could be in store for Sentinel
over the next 10 years. So before we get to the commissioner’s address, just a few more housekeeping items. I want to remind everyone
that this is a public meeting that is also being webcast online. We welcome all of our
webcast participants. The record of this
webcast will be available on the Duke-Margolis
website following the event. I also want to emphasize
that this is not only an opportunity for everyone to hear about what’s going on from FDA and
other stakeholders involved in the Sentinel program, but
an opportunity for thoughtful questions and comments
from all of you as well. We’ve set aside time in
each session to address comments and questions throughout the day. As a heads-up to our
speakers and panelists, Isha Sharma and Nick Fiore, who are right here in front, Nick, Nick will be right there. Nick and Isha will be here
with cards to indicate how much time you have
left on your presentation. We have a lot to cover, so we are going to stick with the timelines. For those of you who are joining us, please feel free to
help yourself to coffee and beverages outside the
room throughout the day. A reminder that lunch will be on your own, there are a number of
restaurants in the area, and we can give you
some tips on those, too. And we’ll be starting, again,
these sessions on time. Feel free to bring food
back into the room here. Okay, enough on housekeeping,
now to get to the substance. I am very pleased to hand things
over to Dr. Scott Gottlieb, the Commissioner of Food
and Drugs at the FDA. Scott has brought a unique
combination of skills and experiences, some might
say he was made for this job, in terms of the public policy engagement, the expertise on FDA-related issues, the perspectives from private
sector experience, as well. He has been, and I hope he
talks about this a little bit, a strong leader and advocate,
not only for improving public health through
evidence and through making the most of FDA’s
capabilities, but also through collaboration with the private sector. And as I think you’ll
hear, he has been involved in the early stages of the
development of this initiative, back when it was a
concept and back through the legislative process
that got Sentinel enacted. He was Deputy Commissioner at that time, and he’s also taken many
steps to expand the use of real-world data and
real-world evidence at FDA. I’ve known him for a long
time and I can’t think of a better person to address this event at such a critical time, the 10-year anniversary of the Sentinel Initiative. Scott, please come on up. (audience clapping) – Thanks a lot, thanks
for having me today. When I first started
working at FDA in 2003, as a senior advisor to Dr. McClellan, I worked on drafting a
lot of his public remarks. And there was one speech in
particular, where I wrote about how we needed an
active surveillance system. We needed to move away from MedWatch. And I remember all the
consternation I got internally, because there was a
perception that we were very far away from being able to achieve that. We’re still not quite there yet, but it’s hard to imagine,
just the short period of time, how much closer we’ve
gotten to that vision of having a truly active
surveillance system, using big data and the
technologies that have become available since that time. I think in 2003, that
was probably a vision that didn’t comport with the state of the technology at that time. But I think a lot of people at FDA in the Office of Drug Safety, in CDER, Janet Woodcock, others, saw the vision, invested in it and made it come about. It’s a real pleasure to be with you today, to reflect on the past
and speak about the future of what is one of the most
important initiatives at FDA, and one of the most important collaborations ever to be undertaken by the agency. On a personal level, I’ll
tell you it’s especially meaningful for me to join you on this 10th anniversary workshop, because Sentinel played
such a significant role during my initial tenure at FDA. Although Mark and I weren’t at the agency when Sentinel came to full fruition, we were involved, as he said, in its early years, and in some of the work leading up to the launch. It is especially gratifying to see how the promise of this project has been realized, and how it today plays a very important role inside the agency. With that in mind, I
want to offer just a few opening thoughts about
how Sentinel has developed into such a successful
data infrastructure, one that today provides a truly 21st-century surveillance system for the agency, and how we might build on that achievement going forward. Sentinel is a powerful
tool for FDA by creating the scientific capacity
to recognize and address safety issues in a timely fashion, by giving FDA a broad and deep access to electronic healthcare data that extends all the way back to the medical record and the patient encounter. And by standing this up in
cost-effective and scalable way, in a scalable system,
FDA’s been able to enhance public health safety in
previously unimaginable ways. Sentinel has allowed us
to develop innovative ways to monitor FDA regulated products and answer numerous
questions simultaneously. And do it in a matter of weeks and months rather than the years it might have taken in the past. The idea for Sentinel was based
on a very simple principle, one that goes to the heart of
FDA’s science-based mission. We realized that to keep up with the accelerating pace of scientific discovery, we needed to develop high-level tools that would allow us to apply and translate the data and turn real-world information about those products into benefits for patients and consumers. We realized the limitations of a passive system, the passive way that we gathered a lot of our data on safety issues. And Sentinel was a chance to be much more active in how we collected safety information by tapping into real-world
observational data that could be queried to answer specific scientific questions. Sentinel has also been
a critical piece of our comprehensive response to
patients and caregivers who want and deserve more and
timelier information about the benefits and potential risks of the products they use. Sentinel is one of several
successful initiatives that FDA developed to drive
innovation and incorporate emerging technologies
into everything we do to protect and promote the public health. By linking projects like Sentinel and the National Evaluation
System for health Technology, or NEST, which CDRH is
working on with other database initiatives
at our sister agencies, and in organizations
outside the government, we’ve been able to build a
rigorous and robust foundation for producing more useful
knowledge at a fraction of the cost required
from previous efforts. Today, Sentinel has
become one of the largest and most successful uses
of big data in healthcare, a much more efficient
system that can generate much more high-quality evidence
in a shorter period of time. And though it continues to
develop, we already have realized many of the benefits that
come from the ability to use real-world observational data as a tool to monitor medical product safety, identify further concerns, and be able to evaluate them.
in the future as well. For example, we should consider
how Sentinel might be used to answer questions about
efficacy, and how FDA might have the tools
and resources to take on these questions in certain
narrow circumstances, where a question around
a product’s efficacy also relates to its safety. One such situation is when it comes to the long-term efficacy of opioid drugs and the long-term prescribing
of these medications. For example, if the chronic
use of an opioid drug is marked by diminishing
efficacy from these medicines, then long-term prescribing
might lead to higher and higher doses being
prescribed over time. That situation could in turn
lead to more dependency, and in some cases, more addiction. This clinical question is
something that we ought to be able to fully understand. Right now, we don’t have good data on the long-term efficacy of opioid drugs, even though they’re
prescribed for these purposes. By leveraging large databases containing electronic health records and
disease specific registries and claims data, we’ve
made significant advances in our understanding of health and disease and safety issues, and
the relationship between health-related factors and outcomes. You’re going to hear
today plenty of statistics on Sentinel, but let me offer a few of the impressive numbers that
frame some of this success. There were more than 223
million members in 2016, and 178 million members
with information from both the medical and pharmacy benefits. There are 43 million people
currently accruing new data. There are 425 million person
years of observation time. And there are 7.2 billion
unique encounters, all contained within this system. And FDA has done 254
analyses using the Sentinel Common Data Model and
reusable modular programs since full activation of
the Sentinel System in 2016, including 116 in 2016 and 138 in 2017. None of this would have
been possible without the enormous contributions
of our data partners who’ve helped us build Sentinel. Their sharing of data
and scientific expertise allowed FDA to fulfill its mission, its important public health mission. By joining together,
they’ve enabled the creation of this invaluable
resource for public health. And their continued
collaboration has allowed us to expand and improve the
capabilities of Sentinel and transform it into a
true national resource. The Sentinel System is a model for how FDA can work with diverse elements
of the healthcare industry, researchers, patient groups,
and others to try to accelerate
post-market safety activities. This type of real-world
evidence infrastructure is providing useful data
for other activities and other studies by other organizations, including IMEDS and studies of effectiveness, such as the FLUVAC study, that could lead both to new knowledge and to changes in effectiveness-related thinking and labeling for products. We’re committed to continuing
this model of collaboration and we need to advance
this infrastructure. It’s become critical to
producing the high-quality evidence needed to guide FDA’s decisions about products it’s
charged with regulating, and help us better inform patients. We know that to meet
our growing evidentiary scientific needs through
access to these data resources, this tool must also continue to evolve. And one of the key questions
you’ll be addressing today is just how do FDA and
its collaborators build on the success of Sentinel to date? The increasing innovation of IT systems and cloud-based technologies,
the use of blockchain and artificial intelligence,
and other types of advanced computing offer continuing opportunities to advance Sentinel’s development and improve the system, its data, and its analytic capabilities, advancing post-market safety surveillance as well as meeting a growing range of research needs. Our PDUFA VI commitments for Sentinel charge us with increasing
the sophistication of the core capabilities,
especially in areas of pregnancy safety, signal detection, analytic tools, and filling
gaps in data sources to improve the sufficiency of the system to address safety questions. We’re also committed to improving
the system’s transparency in how we communicate with industry and the public about what we find. The Sentinel System is leading the way to faster, better, cheaper analyses, proving how real-world data is a valuable resource for the whole healthcare enterprise. And so this workshop provides
an opportunity to discuss the possibilities and
solicit ideas from across the stakeholder community
on how to achieve such aims over the next decade. And I know that several of my
colleagues will be addressing these points in more
detail as this day goes on. So I hope your discussion of these topics is productive today, and I hope we can benefit from your help. I want to again thank all
the partners for their commitment to this effort,
each of you is really key to the success of what we’re doing here. And each of you plays
a vital role in finding new solutions to protecting
and promoting public health. Thanks a lot. (audience clapping) – All right, thank you, Commissioner, it’s great to hear that perspective. Both on the past and the future. And as you heard, Scott’s been involved in this set of activities
for a very long time. I’d like to ask our panelists for the next session on the Sentinel Initiative, what does the next 10 years hold, to come up now while I introduce them. Glad to see everybody made it in, despite the weather this morning. Let me introduce, first, Gerald Dal Pan, the Director of the Office of
Surveillance and Epidemiology at the Center for Drug
Evaluation and Research at FDA. Steven Anderson, the
Director of the Office of Biostatistics and Epidemiology at the Center for Biologics
Evaluation and Research at FDA. And Greg Pappas, Associate Director for National Device
Evaluation at the Center for Devices and
Radiological Health at FDA. We are gonna hear from these three leaders from across the FDA centers
on how the Sentinel System is affecting the regulatory missions and the work that they are able to do. And also with a look ahead. So how they’re benefiting
from the use of Sentinel, some illustrative milestones
of what it’s achieved and contributing to the
mission of each of the centers, and their perspectives
on what the path forward looks like for Sentinel. So you see this recurrent
theme during the day of understanding where we’ve been, where we are, with an eye towards where the Sentinel Initiative broadly can go. And as you’ll hear,
there’s an increasingly diverse range of activities underway related to the Sentinel Initiative. So Gerald, I’d like to
start with you, please. – So good morning, Mark, and thank you. Thank you, everyone, for coming today. What we heard both from
Mark and Scott was a little bit about the history, going back about 15 years now, to when we recognized
beyond spontaneous reports. Those reports are still
actually quite important for what we do and account
for probably over half of our post-market safety label changes. But nonetheless, they’re
limited in the scope of the kinds of questions they answer. To answer some of the more
complex and subtle questions, we needed a better system. One that was able to look at
large populations of people, put identified adverse medical outcomes into the context of how a drug is used, and be able to do so in a
timely and efficient way. So while we’ve been using
administrative claims data for decades, the timely and efficient part wasn’t really there in that system. Everything we were doing
was reinventing the wheel, so to speak, every time
we needed to take a drive. So in 2006, the Institute
of Medicine issued its report on the future of drug safety, and had many recommendations, which really have been implemented at FDA. And it called for us to get better data. And in early 2007, we
held the first meeting that I can remember
used the word Sentinel, where we just talked in very broad terms about what such a system could be. And we listened to representatives
of academia and industry, as to what the potential might be. The passage of the FDA Amendments Act in 2007 gave us the mandate to create this system. And we’ve come a long way since then. The first part of the effort
was really organizational. How do we envision important
questions such as privacy and informatics contributing
to this effort? And what are those considerations? And so now we have the Sentinel System. We had the Mini-Sentinel Pilot, and then in 2016, so two years ago, we really launched the full-fledged system and have integrated it into our post-market safety operations. We’ve had a lot of training of staff, we have a lot of processes in place, whereby staff are aware of this system, and can use it to answer
the questions they had, questions that they
might not have been able to answer five or 10 years ago. And we’ve been able to take advantage of a system that is timely and efficient. The timeline from
identifying a safety issue to having a usable, actionable result is much shorter now than it used to be. And in doing so, it leads
to cost-effectiveness. This has been enabled by a
distributed database model, with periodic updates to the data, as well as to a suite of reusable programs that the safety team at FDA
works with their colleagues at the Sentinel Operations Center, and other collaborators to parametrize. When we find that we need
an analysis that’s not part of the suite of reusable tools, we can expand that
suite of reusable tools, depending on the different needs we have. So Sentinel is not a static
system, it’s a dynamic system. It’s one that keeps growing and is modernized to meet our needs. And even with these successes, we still have a long way to go. We’ve not really used Sentinel yet to improve signal detection,
we’ve used it more for signal strengthening
and signal analysis. But signal detection is
something we want to do. And you’ll hear Josh Gagne talk about that later today. We recognize also that a lot
of what we’ve done so far in Sentinel has been using data
from administrative claims, and we need to go beyond that. We need to see how we can
harness more of the information available in electronic health records. Information that goes beyond what’s in a typical administrative claim. And we’re working to explore
how we can do that now. We’ve also worked hard to make the system publicly available. And I think you’ll hear
about IMEDS later today, Claudia Salinas, from
Eli Lilly will talk about some industry experience with IMEDS. And then we’re also strengthening
international partnerships; our Canadian colleagues have developed CNODES, and you’ll hear what
that stands for later. But it’s essentially a Canadian Sentinel, and they’ve adopted the
Sentinel Common Data Model. And you’ll hear from
Robert Platt later today. So be careful, there are two R. Platts who work in this space. And then what does the next 10 years hold? Well, in 2019, we’re
gonna have to recompete the contract for Sentinel
and toward that end, we recently issued a
request for information. A request for the public
to tell us how we might do things differently, or more efficiently. And specific things we’re interested in are using electronic health
records more efficiently, developing technologies
such as machine learning and artificial intelligence,
natural language processing, and how we can improve access to quick, accurate chart review. And we also have many
commitments regarding Sentinel under the Prescription Drug User Fee Act, and we will be using
those to fill some gaps with signal detection, pregnancy safety, gaps in data sources, and the like. So finally, one of the things that the Institute of Medicine said broadly about post-market drug safety is that it’s really best when post-market safety and pre-market efficacy and safety review are working together. And we’ve made lots of efforts at FDA in that regard very broadly. And as we embark on the initiatives of the 21st Century Cures Act, passed about 14 months ago, we’ll be seeing how we
can leverage the Sentinel infrastructure for issues
of real-world evidence. You’ll hear a little bit
today about FDA-Catalyst, and its move in that effort. And so I think in sum, we
have here a lot of progress, on a long-term public health
and regulatory initiative. And there’s still more room to go. I think that’s what we’re going to be talking about today, so thank you. – [Mark] Thank you very
much, Gerald, Steve? – Sure. So I’m gonna sort of divide my comments into two parts. I was gonna talk first really about some of the accomplishments that we’ve made with the Sentinel Harvard Pilgrim program. I’ll say that 2017 was a productive year for us and that program. We continue to use and integrate Sentinel into our CBER regulatory processes. We’ve dramatically increased
the use of the ARIA tools and done a number of
queries, I think upwards of 40 more queries, which
is a large number for us. And we expect to do more
of that in the future. And we completed 10
protocol-based activities, which are sort of the larger epidemiological studies of the safety of products. Another thing we did that was unique was we launched a rapid
surveillance system for the annual influenza vaccine. And it monitors several safety outcomes, along with an effectiveness outcome. Very important, given all that’s going on with influenza this particular year. We’ve hosted a number of trainings, I think we’re up to five
trainings this year, for CBER staff to get them more engaged in the Sentinel program, and to integrate that into their
thinking about using it for their regulatory
needs across the center. Another thing we did was we
stood up a governance board. So we stood up a Sentinel
advisory committee, and that has membership from my office, Office of Biostatistics and Epidemiology, and we manage the program
within the center. Along with the three product
offices, so for vaccines, blood, and then the tissues
and advanced therapies. So we’re actively engaging the center staff and the product offices as stakeholders in the Sentinel program. Also engaging the center director, because the center director is part of the decision process
that goes into governance. So we’re very excited about that. So just to close in this
section of comments, we’ve had really, I would say good success with the Sentinel Harvard program. And our lead for Sentinel, Azadeh Shoaibi, will talk a bit more
about the accomplishments during her presentation
later in the morning. So I wanted to move on to
a second set of comments, which really focuses on
our continuing efforts to build a better
Sentinel System for CBER. The Sentinel Harvard
program has been great. It’s addressed some of
our regulatory needs, but you know, in the spirit of
always wanting to do better, we felt we needed to improve Sentinel and continue to develop an even
better surveillance system. At last year’s annual Sentinel workshop, I emphasized sort of two points that were aspirational in nature. And the first one was sort of the mantra of better, faster, cheaper,
which circles again back to this concept that we need to build better and better systems. And then the second point
was even more aspirational, and that was about
automation, specifically automation of adverse event reporting. So I’ll talk in a moment about
our efforts in that area. So as far as the progress we’ve made in sort of those two thematic
areas from last year, today I’ll announce that we’ve launched the CBER Sentinel BEST system. So we started this
program in summer of 2017. BEST stands for the
Biologics Effectiveness and Safety system, and while the goal includes claims data, it really is focused around electronic health record data, to conduct biological product safety and effectiveness studies, and also to automate adverse event reporting. So after an open and
competitive contracting process, we awarded two separate contracts in September 2017 to establish BEST. The contracts were awarded to IQVIA, which is formerly QuintilesIMS. And they partnered with OHDSI,
which is the Observational Health Data Sciences
and Informatics group. So this really establishes a second program for us, in addition to Sentinel Harvard Pilgrim. We have the Sentinel BEST program. So playing on words a bit here, we went from wanting a better and better system to wanting the best Sentinel system. So let me just get into what
the contracts really deal with. So contract one, I’ll
cleverly call it contract one, our goals with contract one
are to build a more efficient electronic health record
based data system for CBER, so that we can rapidly conduct queries and epidemiological studies for
blood products and vaccines. Blood products, this is
a very big need for us. And it’s very different
from the realm of drugs. So CBER has some fairly different needs, and this is why we’re moving to some of these new, unique systems and different approaches with our products. So how is what we’re doing
any better and faster? As to getting to better, faster, cheaper. IQVIA, the BEST contractor, provides CBER with an expanded data infrastructure. So I think we’re onboarding about 50 to 70 million patient EHR records. So that’s a large number, considering those types of systems. And then we’re also bringing on board an additional 160 million patients worth of claims data. So again, we’re bringing on these data and these efficiencies. We’re also bringing on the OHDSI and IQVIA data analytic tools that
leverage automation, which fits in with my sort of second theme. So staff can use, for instance, the OHDSI Atlas tool and
IQVIA tools from their laptops to design a query from start to finish. That can be a query or
an epidemiological study. They can press a button,
send that program to IQVIA and IQVIA can run that against
the databases we request. So this speeds things up significantly, and it’s transformational, we predict, for the type of work we’re
gonna be doing in the future. It’ll cut things down from even weeks to months to days to weeks. And so we think that’ll bring
incredible efficiencies. It’s still in the early phases, but we’re expecting to gain
great efficiencies from this. Now I wanted to talk a little
bit more about contract two. The primary goal of the second contract is to develop and use innovative methods like data mining, artificial intelligence, machine learning, natural
language processing, and others, to identify adverse events associated with exposure to blood products within EHR data systems, and then automatically report those identified adverse events into FDA reporting systems like the FAERS system. The goal eventually is
to expand the network. We’ll start with blood
products and then expand into vaccines and our other products. So essentially, our goal
and aim with contract two is to build the next generation of tools and systems for our
future surveillance work for the next sort of five
to 10 years of Sentinel. And that’s really our major goal and thrust with this new program BEST. And Alan Williams, Dr. Alan
Williams, will be talking a bit more about this in
the last session of the day. As far as progress with
BEST, I’ll just say in the four months that the
program’s been underway, we’ve stood up a query system and run about 100 simple queries. So we’re proud to have
that system up and running in such a short period of
time, and expect to be able to do even more complex
queries in the future. I’ve talked about better,
faster, and cheaper, and then just wanted to
touch on the cheaper aspect, and how are we containing costs. So I think I will say that
everybody on this panel and within the leadership of
FDA is concerned about cost and how do we control costs
for these types of systems? Because they’re very expensive,
and we all recognize that. So with Sentinel BEST,
we’re reducing costs and we’re expecting, at
the end of the contract, to achieve a savings of
about two to three-fold over sort of previous contracts we have. And so, again, efficiencies and then cost-savings are really critical to us. So in summary, we have added this new CBER Sentinel BEST system. It’s an EHR-based
system that uses new tools, and we expect it to
expedite the work we do in safety, surveillance,
and effectiveness. The other thing is accessibility. So we’re hoping, again,
to move these systems out to the public, once we develop them, that they’ll be accessible
to the broader community of academics and industry to use. And then sort of on the final notes, priorities for the next year. We’re continuing to expand
and improve Sentinel, and then advance the CBER Sentinel program to begin and continue to
generate high-quality real-world data
and real-world evidence to support requirements
of the PDUFA user fee act, as well as 21st Century Cures. And then I just wanted to thank the number of people involved in this work: Azadeh Shoaibi and her CBER Sentinel team; the staff at my office,
from the business team to the medical officers
that do the review, and the center director,
who’s been so supportive of the work that we do, Dr. Peter Marks. And then thanks to our
Harvard Sentinel colleagues and Rich Platt for the work they do. And then welcoming our new
IQVIA and OHDSI colleagues and thank them for the work they do. And then finally, the thankless job that the data partners have, we
thank them for their work, and the academic partners, because they do the yeoman’s work, and
it’s a very important part of this whole program. So its importance can’t be overstated. And with that, I’ll stop. – All right, Steve, thanks for covering so much ground, next, Greg. – Thanks so, is that on?
– It should be. – Thanks so much for this opportunity to give you a view of what things look like from the device world. First of all, it’s been
an honor for me to work with Gerald and Bob Ball, and the rest of the Sentinel team over this process. So we’re
going from BEST to NEST now. As I’m sure you know, NEST,
the National Evaluation System for health Technology, health with a little H, so it’s NEST, has taken a very different strategy to use real-world evidence for evidence generation
in the device space. This is not to be contrary. There are three critical
reasons I want to emphasize, reasons that kind of motivate
the rest of my remarks and explain why we
have taken this strategy. First of all is the UDI, second is the specific data needs
in the device space, and the third is a strategic effort. So in the UDI, I’m sure you
all know that the drug space has had a UDI, a unique drug identifier, in claims data since the
’70s; we’re only getting to that point right now in devices. We have a law that has the
device uniquely identified on the label, but that’s
not in claims data, it’s not in most medical
records at this point, so we’re still a long ways away
from being able to use those sorts of tools. Second is the specific needs in the device space for evaluation. So a lot of devices, when
you’re trying to evaluate how they’re, you know,
if there’s a problem, it’s not always the
device, it’s the person, or the system, or the hospital environment that affects the use. So you’ve got to be able to
ferret out the difference between a user effect and a device effect. That requires a lot of
information that’s typically not available in the systems
we’ve been talking about. The third is a strategic issue, and having to do with the
fact that, as Mark knows, we don’t have a lot of money. And we’ve taken a very different approach building the system with
a vision as we do our work at CDRH, working with regulating devices. Opportunities come up
and we take advantage of those opportunities
to build this system. And I’ll tell you a little
bit more about that. Despite these constraints,
or the needs of devices, we’ve been highly successful
in using real-world evidence. We have over 40 regulatory decisions, either hardcore regulatory decisions, label expansions, post-approval studies, 522s, compliance decisions, have all been supported by real-world evidence. We’ve also gained access
to over 100 million records in this process,
and that encapsulates two of our strategic goals at CDRH: increasing access to real-world evidence and using real-world evidence. Strategically, we’ve also pursued a number of different
approaches simultaneously. We’re looking at the
broad long-term issue, but I’m gonna focus more
on the short-term issue, the short-term strategies. We’re working with health systems, like the Mercy healthcare
system, working with EHRs, they have a highly-evolved,
very sophisticated EHR system that becomes very appealing. We’ve also worked with Specialty Societies and their registries. So in the device space, we have a unique confluence of needs. Some of our
devices of greatest interest, those that are higher
risk, are implantables that surgeons keep registries on. So we have a resource, and we’ve been, to use the term of art,
we’ve been using coordinated registry networks, CRNs,
which are linked registries. You start with a registry
that provides you with a cohort of people well-exposed, and the device uniquely identified. So what’s an epidemiological study? A cohort, an exposure,
and then an outcome. So we’ve linked the
registries with outcomes data, and in most cases, it’s been claims data. So we have these CRNs,
we’ve got about 16 CRNs now in the making and a very bright future for this part of what
NEST is gonna look like. I’m gonna tell you a little
bit about one of those case studies, the TVT, the
transcatheter valve therapy. So this is an innovation that
came up in cardiology. And we worked with the
American College of Cardiology and the Society for Thoracic
Surgery that maintain this registry to identify this cohort. So it’s almost everyone
who has received a valve in this way over the last
many years, 100,000 people. And then we linked it to
claims data for Medicare. So again, the happy confluence
here is that everybody, almost everybody who has
this device, is on Medicare. And then we have the Medicare
claims, so we’ve obviously validated all those end points
and found it very useful. There have been about 20 decisions, regulatory decisions,
supported by the TVT registry. And I’ve recently completed
with a work group, and I see Ted Heise in the audience and some of my colleagues, Jesse Berlin, a work group with industry
and the registries, to evaluate the value created by this approach, this CRN approach. And the 20 decisions cost industry, there are three companies
that make these devices, cost industry about $25
million, so it’s a lot of money. But we costed out what
it would have cost, the counterfactual, if we
did not have the registry. How much would industry have
spent to get these decisions through the FDA, if they
did not have the registry? It’s about $150 million,
so $100 million savings, 500% return on investment. We also looked at time
saved, between months and years saved, so that’s
important both for public health, because people get devices that
they wouldn’t get otherwise, because of the label expansions, and so it’s faster to market, which is also another benefit to industry. Not to be dismissive or to lapse into irrelevance here: Sentinel still has a very important role in this system that we’re imagining. Again, as a linkage, take the case of orthopedic devices. There are registries that
give us cohorts of people who have been exposed
to orthopedic devices. What we need are then outcomes. Did that person have another
device, another joint replacement, five years,
10 years down the road? There is a perfect role
for Sentinel to provide us with those outcomes,
this highly-refined, highly-curated system that
already exists for us. So we’re kind of working on that. Administratively, let me say a
little bit about NEST: we have set up a coordinating center with a contract to MDIC. They have hired an executive
director, Rachel Fleurence, who is leading a board that has been recruited. And they are in the process
of establishing data partners, designating demonstration projects. They have a couple
contracts out looking at the sustainability of the system. All this in devices has kind of been a pay-as-you-go
system, so we’re trying to figure out how to keep
that system sustainable. And then also a contract to
do evaluation of our system as we look forward to
the next round of MDUFA. So with that, I’d like to
thank my colleagues here on the table and turn it back to Mark. – Great, thank you all
very much for covering so much ground in a
limited amount of time. I heard a few key themes
highlighting, for one, the unique needs, and
data and methods issues at each of the centers
related to the products that you’re involved in regulating. This recurrent theme in
various forms of better, faster, cheaper, making sure
that the Sentinel Initiative overall keeps up with
developments in the world of electronic data and the
analytics that can be used to interpret it for regulatory purposes. And maybe to create
better, faster, cheaper by using Sentinel as an infrastructure for many other real-world
evidence applications. And some new directions, like perhaps a greater emphasis coming
on signal detection. I heard some discussion about use for effectiveness studies as well. We do have a few minutes for questions from those of you who are here. There are microphones
set up in the audience. Gerald, maybe just
quickly to get this going, you mentioned, I think, a use of Sentinel for an approval, could
you say a little bit more about that as one of these new directions that Sentinel may be taking? – So I think we really talked
more about effectiveness, and we haven’t really broached
the issue of approval yet. But my colleague, David
Martin, and others at FDA have launched something
called FDA-Catalyst. And they have a study called IMPACT-AF. And that study there
is looking, I believe, in a randomized way at
the impact of providing certain information to people about atrial fibrillation. And so with the randomization
there, it’s not about one medicine to another,
or a medicine to a placebo, but it’s about randomization
to an intervention. So that’s our first step here. And I think this is gonna
have to go step by step. Clearly, Cures has asked us
to look more at how we can do this, and we’ve got a
variety of committees set up, we’ll be having public meetings and other things to really address this. And IMPACT-AF is done using
the Sentinel infrastructure. So I think Michael
Nguyen will mention that in one of his slides,
and David’s here as well, to answer questions about that. – Great, thank you. And Steve, you highlighted in your remarks the importance of medical record data for many of the CBER regulatory issues. Any early returns on the use of BEST, the augmentation of the
Sentinel Harvard approach to incorporate electronic record data? Are you pretty optimistic there, or is it gonna take some very
different kinds of approaches? – So we’re optimistic,
we’re only four months in. So it would be a big task
to have much in the way of progress in the
medical chart review area. I think what we’re looking
at is some nice tools that IQVIA and OHDSI have; there is a beta pilot to automate and semi-automate the
chart review process. So we’re really hopeful about that. And it is very necessary for our products, because for vaccines, if there’s a signal, you really wanna be very certain that that signal is a real signal. And that’s really why we go
to the medical chart reviews. And the same for our
other products as well. So it’s really a critical part. But I can’t offer any more
in the way of progress, at this point, but next year,
we’ll have more for you. – Okay, and Greg, for you, you highlighted how different the both context issues and data issues are for devices. But I’m wondering if you
looked down the road, beyond the coordinated registry networks that you’re building out
today, that have the reliable information on devices
and the context in which they’re used, particularly
implants and major devices, and the long-term outcomes
that come from claims. Over time, it sounds like
we’re gonna be getting more electronic medical
record data incorporated in the Sentinel System overall. There is a path, I’m not
sure exactly where it stands with CMS right now, but
there’s a path for more payer support for incorporating UDI in claims, and potentially a number of health systems are starting to
incorporate UDI information in their electronic records. Do you see a convergence
here down the road? – Well, first of all, I
want to put that question in the context of what
we can’t do currently, which is that a lot of devices are never gonna be in registries. As I said, there’s this
heavy confluence of high-risk devices that tend to be in registries. There are many important
devices that will never be in registries, morcellators,
multiple use devices, duodenoscopes, that are
all pain points for us. So yes, it’s very important to look at these emerging systems. The sticking point’s still gonna be, you know, there’s EHR, and
then there’s digital systems. So EHRs, currently, as we think of them, are never gonna have enough granularity to pick up that information, we’re always gonna have to augment it with something. But yes, we’re definitely looking at a long-term perspective,
a more integrated system. But as I say, since we’re
kind of paying for this as we go and we’re, all
these decisions that we, all this activity has
been, I shouldn’t say this, but kind of on the backs
of the regulatory system. It’s paid for, most of it’s been paid for by industry towards a decision. And we’ve been promoting
it, because it’s better, faster, cheaper for
them, and that’s worked. So that’s kind of what’s
driving our train. – And I would encourage anyone
who does have a question to head up to the microphone,
okay, good, thank you. Can you identify who you are
when you ask your question? – [Laurie] Sure, my name is
Laurie Griggs, I’m running for the United States
Senate in Pennsylvania. I worked for Reagan at the White House. Three questions, but Mark,
you took two of them. – Okay, good, we have only limited time. So good.
– Thank you. So nobody’s got a crystal ball, nobody really can understand
Donald Trump’s tweets, but, except maybe Ivanka,
but what did he mean on Monday night at the State of the Union, when he said we’re gonna
get drug prices down, yes, we’re gonna speed time
from bench to bedside? I was a nurse when Reagan got shot, by the way, for James Brady. So what really, really, really, in this extraordinarily
contentious political world we live in can we do to really finally, and
the EHR, I help hospitals compare Cerner, and Epic, and MEDITECH, and Compucare, and McKesson,
and GE, and Allscripts. So it’s possible to spend millions
and billions at each hospital. – So is the question about how the? – [Laurie] What’s Trump
mean when he says you guys are the top executives at the FDA. So you already do know
what you’re trying to do over the next four to eight years. What are you really gonna
do to get drug prices down? Because hospitals are spending a freaking fortune automating? – Okay, so question about drug prices. And that is an issue that FDA
has taken steps to address. I’m not sure how many of them are part of this particular initiative right now. But better, faster, cheaper,
in terms of evidence about how to use drugs and
other medical products, seems like it is relevant to this topic? – So yeah, I agree that the
better, faster, cheaper, more efficient ways to learn
about things more accurately, and in a more timely way,
is just generally part of our public health mission. – [Mark] And any other
comments on the same issues? – [Steven] I would echo the same thing that Gerald says as well. – Yeah, in terms of another
issue of where people are concerned about access and cost now, with the really serious toll
that the flu has had this year, you mentioned, Steve,
using Sentinel in new ways to learn about how safe the vaccine is, how effectively it’s working. Again, it gets at this
better, faster, cheaper issue. Maybe you could say a
little bit more about that. – Yeah, so at CBER, we monitor flu vaccine
on an annual basis. And we’ve done this for the last 10 years, looking at Guillain-Barre syndrome, which was really, if
you go all the way back to the swine flu in 1976, that was really a problematic adverse event. So we set up these rapid
surveillance systems, so we could rapidly understand
if there are adverse events. And then implement any sort
of interventions we can. And so with the Sentinel System, we’re trying to set up the same thing, where we can actually just
rapidly get that data. And we have four outcomes
that we’re looking at, as far as safety, and then
I believe we’re looking at hospitalizations as well. – [Mark] Hospitalizations is
more of an effectiveness issue? Preventing hospitalizations
related to flu? – Yeah, so looking at flu
cases that go into the hospital as an indication of poor
effectiveness of the vaccine. So again, this is in its early years, but we expect to kind of
expand these types of programs. Because we feel, obviously, they’re very important for monitoring public health. – And then one more followup
on better, faster, cheaper, and using drugs and other medical
products more effectively. We’ve talked some about the expanding use of electronic medical record information. Greg, I think you alluded to,
and it’s been part of some Sentinel pilot efforts in recent years, use of other types of data. Maybe data coming from
new medical devices, maybe data coming directly from patients. Do any of you wanna
comment on the potential applications or any activities around these types of new data sources? – I’ll take a first crack. So some of the end
points that are critical for the evaluation of
devices, and drugs also, are not very reliable in our
current electronic medical systems, let’s look at the
case of orthopedic devices. One of the key outcomes is
pain and ability to move. And that’s difficult to clinically capture in our current systems. What we’ve been doing
is working through mobile apps. And thank you to Duke-Margolis for an outstanding
product from this summer; we had a workshop this summer
on the mobile app business. Registries have been using
mobile apps to connect with patients to collect
this kind of more granular, more contextual information about pain. And the lesson that we’re learning is that one data set by itself,
standalone, is probably never gonna
be enough to answer our data needs. So linkage of multiple
modalities, multiple sources, is the way forward, and
certainly that’s working for us. I know CDER has been a real
pioneer using mobile apps, too. – Right, so we have some
products with mobile apps. We have a recent approval of a product to look at adherence to medicines. And I think that when we
started talking about this stuff 15 years ago, we probably
didn’t envision, you know, your watch having all your
heart rate and stuff on it. And so I think that represents
the next frontier of this. – Yeah, well, I think
a good note to end on, again, thank you all for
covering a lot of ground in what has happened and what’s coming
in the Sentinel Initiative. Really appreciate all of
you joining us, thank you. – Thanks.
(audience clapping) – All right, now I’m gonna
stay up here, just move over to the podium for a second
to introduce the next panel. And I’d like to ask all
of them to come on up. So our next session is
on updates on Sentinel’s network of collaborators,
focusing on this core program, as you heard about with the
Sentinel Harvard initiative. So we’re gonna hear updates
on the state of collaboration, and I think this is gonna
be, this is gonna involve some audience participation,
I believe, Rich? Let me first introduce
our panelists, though. Rich Platt is the Professor
and Chair in the Department of Population Medicine at
Harvard Medical School, and the Executive Director of the Harvard Pilgrim Health Care Institute. He’s the Principal
Investigator of the Sentinel Coordinating Center, and
as you heard, has been with this effort from its
start, its first contract, as Mini-Sentinel at FDA. Marcus Wilson is President of HealthCore, which is a subsidiary of Anthem, and has also been a
long-time leader in Sentinel and in real-world evidence
development more generally. Vinit Nair is the Practice
Lead for Government Relations and Academic Partnerships at CHI, C-H-I, a wholly owned subsidiary of Humana, which has also been long
involved in the Sentinel efforts. And Kenneth Sands is Chief Epidemiologist and Patient Safety Officer at HCA, a large Sentinel partner,
very much involved in these issues related to
electronic medical record and other new types of data that you heard about in the last session. I am gonna turn this over to Rich, to lead the session and let’s get going. – Okay, great, if there’s a clicker? – There is a clicker.
– I’ll take it, thanks. Okay, our thought was to spend this next period of
time with you describing how Sentinel came to take the
form that it currently has. What it’s like to operate it now, what the future would be like. Well, the future that at least
FDA has asked us to consider. And to do this as a series
of three conversations. So I’m gonna use a few
slides to sort of tee up the conversations, but the
real critical content’s gonna come from my colleagues. And Mark, please jump in at every point. So let me just jump to the bottom line and say, from the very
beginning, we have seen Sentinel as a piece of the very large interest in using real-world
evidence to address questions that are of
importance to the FDA. And let me talk about sort
of six flavors of that. When we say Sentinel as
sort of a shorthand, most people think of that
as the Sentinel data set, which is quite a valuable resource. But from the very beginning,
it has been a condition that FDA imposed that it be possible to link back to original medical records. It is also important sometimes
to link to registries, and we’ve put considerable
effort into that. We’re working quite
hard now on the linkage to electronic health records. And then the activities
that have come to the fore quite recently are being able to link the Sentinel data set to
patient-generated data and to test the use of the
Sentinel infrastructure as a home for clinical trials. So in three parts, we’re gonna talk about how we got to where we are. This is a slide from the 2009
Sentinel public workshop, that Janet Woodcock used, describing what FDA’s plans were for Sentinel. I would just say, we have been honored for the past years to be able to work with FDA, but the FDA drives the
development of Sentinel. We build nothing that FDA doesn’t ask for. And we build it to FDA’s specs. So this was our original marching orders, and since there are a lot of words, I’ll sort of highlight some of them. We were instructed to
create a coordinating center that could coordinate the
work of a distributed system, to work out what proved to
be quite a sophisticated governance system, and to create capabilities for secure communications. This says develop epidemiologic
and statistical methods, but what it really means is
adapting state-of-the-art methods to this environment,
and then actually to evaluate the safety
of medical products. So question number one is that
nothing in what I just read said do it with claims, so why
did we use claims and not EHRs? And the answer is, well, at that time, EHRs were not very widely distributed in the US healthcare system. And so the only place to go for data on a sufficiently large
population was to claims data. The second is an issue
that is still important, which is, in the medical
care system that we live in, if you wanna be confident
that you know whether or not I have a myocardial infarction, or I’m hit by a bus
while I’m biking to work, you can only really get
that from my claims data, because the hospital where my
electronic health record is may or may not take care of me, when I have my MI or encounter with a bus. And so that issue of coverage and knowing that the absence of an event really means it didn’t happen continues
to be an important one. And then the third important one is, there was a pretty good
history of the use of claims for medication safety assessment. And we wanted to say we
can build something useful to the agency and we have
confidence that we know that at least the basic principles work. One of those examples was this work that five of the current
Sentinel
data partners did together, looking at the safety of
meningococcal conjugate vaccine. This was a study where we used distributed methods in five health plans. And for us, it provided proof-of-concept. It also proved that if
you start from scratch, it takes four years to get an answer, it costs $7 million, and
when an FDA reviewer said, this is an assessment of
Guillain-Barre syndrome, can you also, by the way, look at another neurologic complication, the answer was, we’re sorry, we can’t do that. This analysis was built for
Guillain-Barre syndrome. So there were lots of
things to improve on. And much of the development of Sentinel was intended to get at that. Why a distributed database,
and not a single one? Well, originally, there
was no single database that one could go to that was
large enough for FDA’s needs. It is remarkable how large
numbers get small fast, when you’re looking for medications that are not widely used in a population, when you’re looking at rare outcomes. And there were and still
are important barriers to combining separate databases. And I hope my colleagues
will speak to that. There are good, and to my
mind, sufficient reasons that talking about putting
all the data together in a single data set would have meant that we would still be
talking about the protections of that, and the uses of that data set. There are two other
things that are products of having made that decision. The first is that a
distributed system requires that you be extremely clear
about what the analysis will be before you launch the analysis. Which means, among other
things, there is a documentary record and you can’t
really do data dredging, see a result that you
aren’t so happy with, and so change the analysis
after the fact for that. And then, the additional
feature of having separate data systems that contribute
to analysis provides this kind of assurance, sorry. This is the result of an
analysis that we supported, an analysis of FDA’s looking at the safety of oral contraceptives, and
here’s the point estimate result from eight data partners. That single result is the combination of eight separate analyses. And it is, to my mind,
very reassuring to see that in each of eight separate systems, the effect estimate is closely clustered around the final estimate. And that provides a kind of reassurance, because there is so many
decisions that go into creating the data set that is used, that this is a kind of
guarantee that the system is actually answering the
question that you care about. (audience member speaking faintly) These slides will be available. – [Mark] Yeah, we’re gonna
make all the slides available. So don’t worry about
taking slide screenshots. – Yeah, so in our view,
as we were designing this distributed system,
and this was an activity that was jointly
undertaken by our partners together with FDA, we
wanted the system to be able to accommodate many data holders’ data. It was and is important to
minimize the exchange of data. Information exchange
should have no impediments. But the actual movement of individuals’ data should be minimized. The data partners should
have maximum control over the uses of their data. We should be able to incorporate new kinds of data as they became available. But we have found that
it’s critically important for people who understand how
the data was created to be involved in designing studies
and interpreting the data. It should
be possible to implement an analysis identically and
efficiently across the network. And we were anchored in the
idea that we had to have a set of tools that were
really reusable for this. My high-level summary is that FDA made three critical decisions at
the outset that have been really important to Sentinel’s success. The first was to
designate it as a public health program, rather than a research activity. Entirely appropriate, but
it makes a big difference to the fact that this
program operates under FDA’s public health authority, not as research that needs human subjects
approvals, research approvals. Second is, at the outset, the understanding with all of
our data partners was that every request
that FDA makes is opt-in, you’re not obligated to
any particular one of them. And I’ll be very interested to hear how you think about that. The lived experience is,
it is extraordinarily unusual for one of our
data partners to say, actually, we’re not
gonna answer that query. And then finally, the
fact that the distributed network really reduces the
need to share confidential and proprietary data
by orders of magnitude. So that’s my tee-up for the
first of our conversations. And I’m hoping that,
Ken, you might lead off in helping these folks think about what goes into deciding
that it’s in the interests of an organization that has a day job to participate in this activity? – Sure, thanks, Rich. And you know, the benefit for HCA of being a partner in Sentinel is, at
its most fundamental level, the indirect benefit,
it’s the public health mission that underlies this effort. We see this as a really
key national resource. And to the degree that HCA
can be a partner in that, we’re honored and thrilled to do so. As long as the resource
burden is reasonable, and I think that there
has been deep thinking into the design of Sentinel in a way that that resource burden
has been made minimal, relative to the advantages
of participation. So I think that that has been
the most fundamental thing, has been that indirect benefit. There’s other benefits. When I talk to our
analytic group, they say they clearly have gotten
better, and their skills have increased by virtue of
being a partner in Sentinel. That participation in data analytics and database design has made us stronger. And so that is a good reason for us to also continue to participate. And I think that the test
for that is that we use the data sets that are set up for Sentinel largely for internal purposes as well, because it’s our best, cleanest
data set for doing queries. So I think that’s an
important benefit as well. There is the cost, in terms
of maintaining the program. And I think it’s great
that we’re talking about the next generation of
Sentinel, because keeping institutional interest
I think will improve, as the capabilities of Sentinel improve. There haven’t been a lot of downsides. The fact that it’s
opt-in is a key element. But the reality is is
that it’s extremely rare that some reputational
issue or legal issue arises that would make us choose to opt out. – Marcus, Vinit? – Well, great comments, I echo everything Ken’s outlined, that the public health impact was really the original reason we did
it, but there’s other things. You mentioned methods and
being part of the methods development, as well as
learning from the methods as they’re evolving and sort of being on the front edge of that
has been really important. I think the other thing
that I’d really want to say is critical, especially as
we start thinking about, and the panel before talked about sort of the maturation of different types of data. Rich outlined sort of
building on the claims data, and as a starting point
for ultimately getting a much more complete view of
the individual patients. Just having data and then doing analytics on the
data is not good enough. We really have to learn how
to use the data the right way. And learn what it’s good for
and what it’s not good for. And I think that each data
source, and each time we use it, and even different types of claims data, our sources as well as
different types of data we’re adding to that, we
have to really understand deep down what really drove its derivation in the first place. And how did that impact our
interpretation of the data? ‘Cause there’s a lot
of variability in that. And more than anything, we
want to get the right answer, not just an answer quickly, we
want to get the right answer. That’s why better is at the front end of the better, faster, cheaper. You gotta get better right first. And then you can really
focus on faster and cheaper. And I think that was
really very much in mind of the construct of
Sentinel in the first place, is really getting it set up
so we can learn together. And being able to learn
with the folks to my left, as well as many other folks in this room, and that are not here today,
that are partners in all this. I really look at this as being very much a collaborative shared
learning environment. And Rich, you and your
team at Harvard Pilgrim have done a phenomenal job,
in terms of how you pulled that group together and kept
us disciplined in that process. So it’s the learning process
that’s really key for us in this whole process,
’cause we think that, there’s a lot of utility
beyond just safety, and there’s no question, but we think we’ve gotta do it one step at a time. – I’ll echo both your
comments, certainly, on this. But I’ll try to add on. And I think it’s no secret
all of the organizations have, like, Rich, you said, day jobs. And that’s not necessarily a bad thing. Because, and I think even Ken touched on this, this is not a key driver
of our core business. And what that translates
to is that we get to have the luxury to spend time on initiatives and not really get bogged down
by the financials around it. And calculating hours
and dollars and cents. So a case in point is
that we’ve been involved in this initiative, just
like Marcus and team, for over 10 years here. And let me tell you, in corporate
America, 10 years is a lifetime to maintain the systems, to maintain the teams, in an environment where folks can change. We all find different jobs and move on. Having the same core teams
run the same operation for 10 years is a testament to the kind of learning Marcus was describing. The team has learned so
much from Rich and his team, and FDA folks on this team,
that they continuously feel a sense of
professional accomplishment, enlightenment, and learning, and that plays a key role for us as a partner. Now on the side of the company, that’s a significant
investment, like I said, for corporate America, for any
company in corporate America to make for 10 years, to manage
and maintain the same team. So we couldn’t be more
excited to be a part of this. And I’ll also touch upon a
little bit on what Marcus said, which is the importance of data. All of us have claims data, which forms the core functional system. But we are also adding
on other layers of data. Now keep in mind that
health plans do not need all the elements that are
made available for research for the purpose of paying claims. There are only certain
elements that are needed. But all of us have made
significant investments in gathering elements, for
example, like laboratory values. There is probably no need
for us to know exactly what the values are to pay a claim. But the fact that we are
all invested in that is the reason why we want
to take those databases and use them for research purposes. So these are some of
the things that go beyond our typical
desire to be part of Sentinel, but we do firmly believe that
this is a national initiative. I would even go so far as to say that, to some of us, it’s also
a patriotic initiative, to be a part of this
exciting adventure here. – Mark?
– I think this has been a great discussion. Maybe
I could just highlight some of what I’ve seen
over the past decade, and the last panel
alluded to this as well: just how much hard work
has gone into this effort. The notion of using data
from real-world systems, that were collected for other
purposes, paying claims, and now increasingly, lab results and electronic medical record information. And then using it for these
initial safety surveillance activities, and a broader
range of activities now. That takes a lot of work. It sounds really cool to
use data for analytics, but an awful lot of data
cleaning and standardization is involved. You alluded to some of this, Rich. A lot of that work went
into getting to this point and developing methods
that are fit-for-purpose for these data as well,
as has been alluded to. I wonder if I could ask
those of you on the panel, as you think about this
better, faster, cheaper theme, now that there is a
lot of experience here, a lot of well-established data models, well-established mechanisms
for working together. How do you see that
playing out in the future? Are there lessons that
can be learned to extend Sentinel to some of the new
types of data that we’ve talked about today, or to some
of the new directions ahead? You may be wanting to cover
this later in the panel, so I’m basically just teeing you up for the next set of issues. But I did want to highlight that a lot of capital has been created by all of the work that’s gone in to getting Sentinel to the
point where it is today, thanks to these efforts
by all the data partners working in this shared governance
as part of the network. – So let me show another few slides, which speak directly to that. One thing that I want to
point your attention to, and that is the term data
partners was not chosen lightly. What these people, and their colleagues, and their organizations
are is not data vendors. And we are always mindful that this is a shared activity that
we’re all in together. To your point, Mark, I’ll
just quickly say where we are: the term good
enough for government work has taken on an entirely
new meaning for us. And that is, at every step along the way, our colleagues at FDA have
said, we intend to use this information to support
regulatory decision-making that will affect the health
of the entire nation. And so it is essential that we understand that the inferences that come out of these
analyses really represent the facts as they were. And that everything you
do has to be auditable in a way that typical health services, or epidemiology research, is not. And so building
that sort of robustness around this system has been a very important consideration. And to be perfectly
honest, it’s added a lot of costs to the system,
and from my perspective, it’s been money and effort well spent. So Sentinel today is one that is in routine use, as Gerald, and Steve, and then Greg said. It’s gotten some public recognition, as being a high-quality and useful resource for the nation. The data partners look like this, those who have attended
this meeting in the past will notice that, for the first time, CMS data is part of the system. That is where Duke Medicine
is now above the line as the data and scientific partner. That represents the fact
that Duke is now serving as the focal point for a CMS node. And so we now have, thanks to the considerable
efforts of our colleagues at FDA, access to a 100% sample of the CMS population. We spent a great deal of
time developing a data model that informaticians would call
a simple data model. And it is. The big advantage
of this simple data model is that all of the data
partners work together to agree that they could
faithfully populate this model in a way that
meant the same thing from data partner to data partner. I don’t actually know
what’s inside your systems, but my understanding is that
the term ambulatory visit is not a defined term, but
there are several hundred kinds of encounters
that occur when somebody isn’t hospitalized, but they need to be rolled up into ambulatory visit. And so there were discussions that went on for quite some time. As Vinit pointed out, when it became clear that laboratory results
would be available, we could add a data table. As Hospital Corporation of America joined, we added inpatient pharmacy
data and transfusion data to be able to support
those kinds of studies. So it has proven to be, I’d liken it to, let’s say, a Land Rover. It’s, by design, robust and simple, but
can serve as a platform that can take FDA in a lot of directions. FDA made the decision that
it was worth investing quite a lot in curating the
data, so that the data are, essentially, always ready to be queried. Each data partner refreshes its data approximately quarterly. It is usual that in that refresh process, something in the 1,400 data
checks that are performed arises that requires
a conversation between our team at the Operations Center and the teams at each of the sites. And in approximately half
of those data refreshes, it’s necessary to rerun the refresh to deal with that data issue. It emphasizes, again, the importance of having people who understand the provenance of the data involved in creating it. These data, this snapshot
of what’s in the data set, have been updated since
the commissioner’s numbers, because it includes the Medicare
fee-for-service data, showing that we now have
quite a substantial fraction of the US population
under direct observation. Among people currently accruing data, it’s about 67 million people. So Sentinel has become
more and more robust now because of our ability to add important new data sources. For the past several years, we
have shown a slide like this, which was originally drawn
by our colleagues at FDA saying that Sentinel provides
tools for the toolbox. What this slide doesn’t really convey is how much more sophisticated these tools are now than they were when we first showed this slide. The main cohort identification
and descriptive analysis tool is now in release 5.2.1, and has become a workhorse for
a whole variety of studies. We have been committed to making these tools available to the public. The next of the public trainings in what these tools do is occurring tomorrow and will be available online. But it’s now possible to do whole studies that essentially replicate
the kind of four-year, $7 million study that I showed originally, in a matter of months
using existing tools. When we say months,
most of those months go to exchange between FDA scientists and Sentinel Operations
Center scientists, to actually refine the specifications to a level of detail at which they
can actually be implemented. The increasing
sophistication of these tools is in direct response to
the needs of FDA scientists. I want to point out to
you that among the things now being bolted into the tool set is the ability to link
mothers and infants. You could say, why should that
be a difficult thing to do? And all I could tell you is
that one person goes into the hospital, and two
people leave the hospital. And that second person
only keeps the mother’s ID for a very short period
of time, and then has a whole new ID and it’s
real work to understand how to link them and
how well to link them, and how confident we should be. So using these tools, we
now have a suite of tools that can take agency
scientists from doing very simple feasibility queries very quickly to doing complex kinds
of descriptive analyses, to doing full inferential studies. Identify a cohort of new users of Drug A, and a cohort of new users of Drug B and compare their outcomes
in ways that adjust for a whole variety of confounding. I’m not gonna take us through an example, because that’s not the
point of today’s discussion. But it is important to note that along the way, it
has become necessary to build in capabilities to do things like understand how soon after a drug exposure occurs is a person really at risk for the outcome
that FDA cares about? What do you do about
the fact that sometimes people refill their
medication prescription before the original one was exhausted? What do you do about
the fact that sometimes there’s a gap between the end of one dispensing and the new refill. A whole series of
questions, and those are now capabilities that are
built into these tools. And the answer depends on the
question under evaluation. And we’re finding that it’s sort of, with every set of questions, there’s often a new set of capabilities. And to the greatest extent
possible, we update these tools. The commissioner mentioned
that there have been 254 analyses done in the
past couple of years. We don’t have even a back
of the envelope estimate for what it would have cost to do these analyses using
the old kind of system. But the answer, frankly, is more than the agency could afford. And so many of these are analyses that never would have
been done, absent this. I said that an initial boundary condition for the agency was that it be possible to link back to the systems of care from which these records were generated. And we’d say that there are three kinds that we’ll talk about right now. The first is this expert
knowledge that only lives with the people who are in the systems. The second is the ability
to deal with these data quality issues that arise every
time you look at a data set. And the third is the
ability to go back to obtain full-text provider records
when they’re needed. And there are two major
reasons to do that. One is to validate the ability to use coded data. And that coded data could be coded data in the EHRs as well as in claims. And second is the example
that Steve Anderson described. Did this baby have intussusception? And to be able to read that
record and understand that. Now we had teed up
another conversation here, and I just need to test you guys. I realize we want to leave time to have a really in-depth conversation about where we’re going, so do
you want to weigh in here on how it’s going, or
should we keep on moving? (man speaking faintly) Maybe brief comments. – I think it’s going pretty
well, so let’s just keep it. – Well, that was a brief comment. – So where are we headed? And I’ll say again, we only go
where FDA says we should go. But among the things
that are extremely clear is that Sentinel needs to
be able to take advantage of EHR data, which now is
increasingly available. That it’ll be important to use methods that are appropriate for electronic health data. And that includes things like using machine learning to develop algorithms. And to understand how
to use natural language processing for use in
free-text information. Now in order for EHRs to
be useful for Sentinel, we have to recognize
that in a lot of ways, with regard to
electronic health records, we are almost to the
point where Sentinel was with regard to claims 10 years ago. That is, there are lots of EHRs now, and no EHR system is sufficiently large to be able to support Sentinel’s needs. The second is that, to
the best of our knowledge, there is still no standard
widely implementable method for taking advantage
of the information in a bunch of separate EHR systems. That is, distributed machine learning and distributed natural
language processing are topics that are gonna
need serious thought. And finally, it is going to be necessary to link EHRs to claims
for many of the things that FDA is interested in. The kind of data that Hospital
Corporation of America has, complete data for inpatient stays for 5% of the US population, is
ideal when the exposure and the outcome occur in the hospital. But for many, for the
large majority of questions that FDA has, it’s going to be essential to be able to link the
information in the EHRs with the claims data, for the same reason that we described about me and the bus. That the EHR data by itself is ordinarily not gonna make it. One of the things we didn’t talk about, that I think would be helpful,
if you folks will discuss, is what it takes to link data. And in particular, the governance issues surrounding who gives whom what data, and what kinds of assurances do there need to be to make that happen. Because I know that Marcus and Vinit, you’re actively doing that as
part of PCORnet activities. Part of my takeaway is we
will need to figure out ways to do distributed analysis: figure out that this is
Richard Platt’s EHR here, and Richard Platt’s claims data here, never actually put them
together, but do a distributed analysis
that uses both of those. And this afternoon, my
colleague, Darren Toh, will talk about some of
the methodologic work that he’s been doing to give
Sentinel that capability. Bob Ball in the Center for
Drugs is leading activities that are aimed directly at
this set of issues around EHRs. We have a work group that is assessing existing best practices
to be able to automate the review of medical records. And Bob and Jeff Brown are
leading the development of a workshop, I believe Duke-Margolis is deeply involved in that, too, to bring the best and
the brightest together to help inform the development
of an agenda for that. So here’s the second part of the reason that it is so advantageous for Sentinel to be built on the
foundation of live systems. And that is, these are
systems that are able to go back, not just to medical records, not just to hospital
systems, but to directly engage their patients, their
members, and providers. So I’m gonna take a minute to talk about the work that FDA-Catalyst is leading. So FDA created the FDA-Catalyst program to deal with situations in which the Sentinel Initiative would engage directly with members or providers. And to do that in a way that takes advantage of this infrastructure. David Martin led the
development of a mobile app that studies safety and effectiveness. Mobile apps are widely appreciated to be
useful adjuncts for this. The goal here is to develop a mobile app that works for Sentinel. The special sauce here is to be able to directly
link the information: that is, use Sentinel to identify
people who have conditions of interest, and recruit
them to provide information, which is then directly
linkable to their information in the Sentinel distributed data set. So that’s the special
sauce here, and why it took considerable time and energy to do that. The first proof-of-principle
has been completed. This was a study where
pregnant women were recruited. The patients were directly involved in developing the design of this app. The goal was to test this with 50 women. So 1,000 women were approached, more than 50 agreed to participate. And the results have been quite positive. It proved that women
were willing to provide information two or three times a week. The information that they
provided was quite useful. And it gives us real reason
to think that it is now possible to use this system to address questions that are of
interest to the agency. With the Center for
Biologics, we’re developing sort of a next set of options around populations for whom it would be of great interest to collect information. And part of the reason
for that is that so much of what’s of interest is not in claims, and it’s not in the EHR,
but it could be obtained directly from those individuals. And then finally in clinical trials, Gerald mentioned the IMPACT-AFib study, my colleague Chris
Granger will talk about it in more detail this afternoon. What he won’t say is
the smart idea to choose this as the topic for our
randomized trial was his, and he deserves a lot of
credit for bringing this to us. So IMPACT, so you know we’re
dealing with cardiologists when you have an acronym like IMPACT. But this is an 80,000 person,
individually randomized trial involving Marcus’s organization, and Vinit’s organization,
and three others, in which we’re testing
the ability to motivate people who have a condition to be the change agents with their clinicians. So here’s who’s involved. As always, we involve a
patient representative in designing the study. And there is an intervention aimed at patients
and one at their providers, and the goal is to try to
and the goal is to try to activate patients,
to activate their providers to initiate oral anticoagulation for individuals who have
atrial fibrillation, have other risk factors for stroke, and who are not getting anticoagulants. The important thing to say here
is the entire infrastructure for this study is available in the Sentinel data set. You can identify the
people who are at risk, you know who their providers are, the health plans directly
engage with their members. And then, when the study
is over, all of the results will come from the Sentinel
distributed data set. So this study is, an 80,000 member study is never done for less than
something like $80 million. And this is a study that
is a $5 million study. Finally, I’ll say that
from the very beginning, FDA has said that it’s
important for Sentinel to be a national resource. And I just want to say that
the work that we have said was coming along has
been going quite well. The IMEDS program now has a
portfolio of research projects, where regulated industry is working with Sentinel data partners. The NIH Collaboratory
Distributed Research Network is making steady progress in providing FDA scientists the ability to
engage these data partners. And the partnership with
PCORnet is quite robust. Okay, so we’re now to
what I think is the most interesting part of the
discussion, which is, so what do you think about
how we’re gonna make this sort of additional leap,
in terms of expanding the uses of the system we have? And what do you see as the big challenges? Marcus, do you wanna lead us off? – Sure, sure, so I think we
have made a lot of progress in 10 years, and just to
kind of go back to one of the topics before,
because within those slides, there’s a lot of important
points, also a lot of questions. And we’ve talked a lot about
electronic health record data, and there are a
couple of things to understand about EHR data. So first of all, we have to master the use of those types of data in combination with claims data and other sources of data, including, at some point, genomics data. But those of you who
have worked with EHR data understand just how fragmented it is. Also, how inconsistent it is. And so the learning curve
that we had for claims data, and you talked a little bit about, you made the statement that we’re starting with EHR sort of where
we were with claims data. I would say we’re a lot
further behind on EHR data, because we were a lot further ahead, in terms of standardizing
how claims data was captured. And it was a bit more
uniform, even though a lot of the things behind the
data can confound the claims data in non-apparent
ways, which is why we did the distributed model for
Sentinel in the first place. There’s so many things
in EHR data that impact the interpretation of the
data, the meaning of the data, that that’s critical to get right. So there’s a huge learning curve in this. And so I think that’s part of it. The other part you mentioned was the fragmentation of the data. So you mentioned, if Rich
Platt’s data was in the hospital and in a claims system, but it’s also in a primary care practice,
a specialist down the street, an ambulatory care center, there’s all different types
of places where fragments of your data, your EHR data, exist. And what we learned 20-plus years ago, when we started HealthCore, we went out on this mission to try
to create that view, so that we could do incremental research and insight development,
and what we learned very quickly is that there’s one place you have to start. ‘Cause we tried starting with the chart, we tried to do it within
the outpatient setting, we tried in the hospital setting, we tried all different places to
start building that view. And what we kind of came back to was no matter how we worked from
different vantage points, we had to start with the claims data, because of the things you talked about. It serves as the index. It’s imperfect in many ways, but it’s the index, and
the best index we have, even in its incompleteness,
for starting this. And so then you can start
building on top of that. And so I think that’s the strength that we have in the foundation
with Sentinel today, but we have to aggressively
now begin to look at bringing in the other
data, and learning to use it on top of the data that we have to date. Because there’s so much
value in its creation. So I applaud a lot of
the work that’s going on in that space, and also encourage us to continue to stay
disciplined as we do it, to make sure we’re really getting it right a piece at a time, because
what we’re building is a national resource and
we want to get it right. – So Vinit, will you say something about what was involved in your
engaging in this clinical trial? Because you have tens of thousands of people, of your members. Obviously, sending them a letter saying, this is your health plan,
you might have a problem, is not the kind of thing you do lightly. – So certainly, Rich. And so we’ve been
constantly trying to balance between what I call the thirst
for research versus risk. And sending letters out to 50,000 individuals becomes a part of that. Because, keep in mind
that this population, as you all know, it’s not
a sanitized population. They receive a whole lot of mailers from health plans on all sorts of things. And let’s face it, what
happens to most of the mailers that you get from your health plan? They go to the trash can, unless there’s a check on the outside. So getting people’s attention, in terms of having this as an
incredibly important part of the study, is one key challenge. But going beyond that, I’ll also touch upon
something that Marcus said. It is the value of bringing
EHR data into the space. The data is not perfect. We are still learning, with a long way to go in terms of learning that piece. But I’ll tell you one piece,
and I’ll weave this in, too, by saying something we have
said since day one here, which is, and not to
downplay the importance of the technical stuff, and you have heard this all the time: technical is great, governance
is even more important. And where that falls into this is, there is an even higher level
of, what do you call it, concern and care that we have
for the data, for EHR, especially because of
the sensitive elements that the data contains. Now somebody once told
this to me at some meeting, and I want to give that person credit. Claims and EHRs are stories of people with their details
removed, absolutely true. And I think that’s a level of sensitivity as an organization we
keep towards the data. That certainly adds to all
complexities, like Marcus said, another layer of protecting the data. And with all those protections, it gets much, much harder to use it. So how do we find the right balance of being able to use the data while at the same time protecting the data? And then, like Rich
said, sending these mailers out to individuals, and let’s face it, this data is not accurate,
like we all said. So people do get mail
sometimes when they do not have that particular condition. So how do we handle
those kinds of concerns? So it has been on a case by case basis so far, so to speak. Our engagement in IMPACT-AF has been, thankfully, great so far. We have not hit any snags, we have
not done anything stupid. So it’s been going great. But so far, we’ve been handling
it on a case by case basis, making sure that the
information’s accurate. It’s a learning process right now. – So Ken, I know Hospital
Corporation of America has experience doing
cluster randomized trials, which is all the rage in some circles. So could you give a sense of is it a real thing? Can organizations like yours do them? And do they have a future to contribute to comparative effectiveness assessments? – Yes, I think absolutely. We have 180 hospitals that
can participate in trials. We’ve run trials with as many
as 140 hospitals participating simultaneously in a single
cluster randomized trial. And I think it’s a highly efficient way to look at an intervention. Right now, in partnership with Harvard, we’re doing the Swap Out
trial, which is comparing two therapeutics,
mupirocin versus iodophor for MRSA decolonization therapy. And we have randomized, by hospital, and are collecting that data, which, again, there’s resources involved, but I think that the
cluster randomized model will allow us to get a lot of
data at a relatively low cost. We find that our hospitals enjoy participating in these trials. It’s a sense of, it’s extra
work, to a certain degree, but it’s something that
engages our nurses, engages our physicians, and
so we see benefit from that, aside from the fact that we benefit from the information achieved. As I looked at your last set of slides, it’s an ambitious agenda. And we talked earlier about the balance between resources and benefit. It looks like a more ambitious, and a potentially more
resource-intensive agenda, but it strikes me as
having a higher degree of overlap with our internal priorities. We’re spending an enormous amount of time thinking now about how we can use our EHR for natural language processing,
for machine learning. We are trying to set up our EHR to be able to have those capabilities. We’re a fair way in that direction, but we’re not all the way there. But it’s an area where
groupthink would seem a really powerful way to
move things more quickly. – Can I add onto that? Just the point about the
cluster randomized trials, and it goes back to designs. Retrospective analysis of
data, of any type of data, regardless of how robust, goes only so far: it
can answer certain questions, but in many cases, there’s a
limit to what it can answer. So prospective designs are critical. And I think the prospective
real-world designs, like pragmatic trials
and large simple trials, those types of things for me, ultimately, are gonna be essential. And advancing the methods,
advancing the creation of that environment to do this are
gonna be absolutely critical. And I actually think that
Sentinel, in this case, provides a great foundation for that. Because you can do the analysis, and then quickly move from
that to a prospective design, and on those populations,
because you’ve done a lot of that work already. So being able to do
the prospective designs on the back of something that we did in a retrospective fashion,
if you would, I think, in my mind, is a great
potential opportunity for us. – Great, Mark, do you wanna take over? – There’s just so many ways to go forward in this very strong Sentinel foundation that you all have collectively built. I would like to also see if there are any questions from the audience. Maybe this would be a good time for people to let us know about that. I just wanted to make one comment, too, about coming back to this theme of better, faster, cheaper, and Ken’s
point reminded me that that’s a mantra that the rest of the healthcare system
is facing today as well. And with these efforts on getting towards more real time data,
integrating more sources, it really seems like there’s
a potential for much more alignment with a lot of
other strategic interests of the data partners around demonstrating that care can be delivered
more effectively, targeted to the right patients, more of an era of
precision-oriented medicine that uses evidence-based best practices. The understanding of these data systems, bringing them together,
and having analytics to support that, is not only relevant for the next round of
questions from the FDA, but I think may be increasingly aligned with healthcare systems
and with data partners that themselves are under the gun for demonstrating better
outcomes, lower costs, fewer complications for the
patients that they serve. And I don’t know if you are,
I see some head-nodding, I don’t know if there could
be any more comments on that. But I did have a couple of
questions that I’d like to ask. But let me see if there
are any questions on that, and then I wanna go to the audience. And then I might have a
followup, if there’s time. – And I need two seconds to–
– I know you need to end it. – Then I’ll just do a quick
comment on that, Mark. I think it’s an excellent point. The health plans, in our
world, we spend a lot of time trying to close gaps in care
where we know what to do and trying to get folks to do it. The IMPACT-AF’s a great example of that. At the same time, when we know what interventions are actually gonna work best, it’s trying to get the providers
and the patients themselves to a point where they’re gonna
achieve the best outcomes. The other part of this we can’t ignore is that in most cases, and Rob Califf points this out a lot in his talks out there, most of the time, we don’t know what to do. And so it’s really critical
for us to figure out what to do, especially
on the individual basis, or in very specific
populations, the evidence is not granular enough to make the best decisions. So we really have to look at a broad array of evidence development
and building the resource. The resources and the environment to do that are really critical for us. – Yeah, I do see Sentinel
contributing more and more into that overall effort as
this alignment continues. Please, and we’ll have to
keep these questions quick, so much to cover. – [Suanna] Hi, Suanna
Bruinooge with the American Society of Clinical Oncology. I’m wondering, we have a lot
of interest in oral adherence in the cancer therapeutics
field, and I’m just curious because you’re HCA, and
CHI, you clearly have a lot of reach into pharmacy data, provider data, and probably stuff that our members don’t
have, quite frankly. So I just wanted to hear
you talk a little bit about maybe access to pharmacy data. Do you work with PBMs to get that? Or kind of what your capability is in that sort of ambulatory space as well. – It’s probably worth
pointing out that Sentinel works with actual dispensing data, whereas EHRs have prescribing data. And there’s a big difference. – And you’re right, and in fact, I would say that, so if you’re looking at all the kinds of different
data, from ambulatory to the inpatient to,
let’s say, a pharmacy. I would certainly argue
that pharmacy is probably the clearest line of
data that we would have. And that’s not because of anything else other than the transactional nature of the data; the waterfall is much, much clearer. So to answer your question,
yes, we do have access to that. And in fact, that’s probably
the clearest data system that exists within our
parameters right now. – Yes, same here.
– Please go ahead. – [George] George Plopper,
Booz Allen Hamilton. I was wondering with respect
to recruiting partners here, how you address the idea
that you’ve got different EHR platforms that you’re
trying to merge here. I’m wondering how agnostic
Sentinel is with respect to that. And any challenges that
you’re encountering marrying those data together. – Yeah, thanks, and that
probably is going to be the last question we have
time for in this session. But Rich, you were saying
this is sort of like claims 10 years ago? – Honestly, that is an
enormously challenging question. The EHR landscape is highly balkanized. The overlap between, we’ve
said you have to have claims, and the overlap between the EHR systems and the claims systems is only partial. That is, if Sentinel now looks at 20% of the US population, and the largest EHR systems look at 20% of the population, you got way under 20% overlap, which is problematic, so it’s among the many, many challenges there are. To go from what is obviously a good idea to actually being able
to stand up something that can work effectively to
support the agency’s needs is, on the one hand, the day-to-day
work is not glamorous. My staff made me take out
a whole bunch of slides that I wanted to show
about what the heck goes on to just take a straightforward query and turn it into a straightforward answer. And they said, you can’t do that. But I’m hoping we’ll come
back in a couple of years and say, look, it looks pretty simple. And we’ll have succeeded if it looks pretty simple, even though it isn’t. – Yeah, so that’s certainly ahead. The other questions that
I wanted to ask pertain to this notion that came
up earlier of moving from signal strengthening
and confirmation issues towards signal detection issues, which could be done with
all this capital investment, but means going to a different model, running potentially a
lot more associations. That’s something that may be in the future with more of an emphasis on effectiveness determination and randomization. Sentinel’s had to deal
with issues of consent that were not part of the earlier process, but may be a bigger part in the future. We don’t really have time
to talk about those now, but I do wanna turn back to you for, Rich, any final comments. – Well, the final comment
is you see four of us here. There are 400 who are
actually doing this work. Some of you are in the room. So folks, if you’re part
of the Sentinel team, either in the Operations
Center or the data partners, just stand up for a second please. (audience clapping) – [Mark] I have some online, too. – And then, I just want to show you sort of all of the people
who make Sentinel successful. – [Mark] This gets longer every year. – It sure does, so Sentinel only works because there are a lot of
people who have great expertise that they’re willing to
contribute to the system. So thank you.
– All right, well, all of you, Rich, Marcus, Vinit, Ken,
thank you for contributing your expertise, too, we really appreciate your joining us for this
panel and all of the work that’s going on, and the
opportunities for the future. Thank you.
(audience clapping) All right, we have covered a
lot of ground this morning. I’m trying to stay just about on time, so we’re gonna take a short break now, and reconvene at 11:15, lots more to come. Look forward to our next session. – Thank you.
– Thank you, very good. (people chatting) – Okay, we’re gonna go ahead and get started in our next session. If I could ask you all to
please start taking your seats. Oh, sure, and I’ll ask our
panelists for the next discussion to start making your way
up to the stage, thank you. How’s it going? Okay, welcome back, everybody, my name is Greg Daniel,
I’m Deputy Director in the Duke-Margolis
Center for Health Policy. Let me echo Mark’s earlier
welcome to all of you for joining us in the
Sentinel annual meeting. In this next session, we’re gonna take a bit of a deeper dive into how the FDA is utilizing Sentinel
within its own processes. This panel will start with presentations from both CDER and CBER
Sentinel program leads, and then we’ll pivot to some
reactions from two panelists that are taking quite a unique perspective. One is from the FDA’s Office of New Drugs, so that we can hear how that group is incorporating the Sentinel results and processes within its
decision making, as well as a reaction from the private
pharmaceutical industry. So joining on this panel,
we’re gonna start off with two presentations
from FDA, Michael Nguyen is Sentinel Lead at the Center for Drug Evaluation and Research,
so the drug center. Azadeh Shoaibi is Sentinel
Lead in the Center for Biologics Evaluation and Research. And then following
those two presentations, we’ll have some reactions
from Brian Bradbury, Executive Director of Amgen’s Center for Observational Research. And then also, Mwango Kashoki,
who is the Associate Director for Safety in the Office of
New Drugs at CDER as well. So I’m gonna go ahead and turn things over to Michael, thank you. (speakers speaking faintly) That’s all right. (Mwango speaking faintly) – And they’re like oh my god. – All right, so my name’s Michael Nguyen, I’ll be speaking on
behalf of a lot of people in the Center for Drugs,
who make this all possible. Many, many people from
the Office of New Drugs, as Greg mentioned earlier, the Divisions of Epidemiology at CBER, the Division of Biostatistics,
and many, many more. So we’re gonna take a little bit of a deeper dive into this. So I’m gonna talk about
three things in my talk. We’re gonna talk about
measuring the progress through the lens of
independent assessments. Then we’re gonna review the
system that we’ve created and how we’re using it at CDER. And why having a system matters. So let’s start here. So these are independent assessments done by a highly regarded consulting firm, as part of our meeting PDUFA commitments. There were two done, one that was completed in September 2015, which was the interim assessment, and the follow-on, which was completed in 2017. So why mention these at all? Because they provide a longitudinal perspective. The 2015 report covered both the Mini-Sentinel period, as well as a year after Mini-Sentinel. And the 2017 report brings in the much more recent period, after Sentinel’s activation. And it was a consistent
rigorous methodology that was done independently,
so you don’t have to believe what I say, but these are words and conclusions drawn from
an independent entity, using methods such as
interviews with FDA staff, Sentinel Operations
Center and data partners. They implemented an
electronic survey for those, to cover the gaps that weren’t covered by the dozens of interviews. They reviewed key operational
metrics that we provided them. Review of publications and reports. And most importantly,
they measured the program against predefined qualitative criteria, in what they call the
Sentinel Maturity Model. So here’s, at the highest level, the interim assessment results. You can see here these are
three of the dimensions that they evaluated,
talent and organization within FDA, mostly the management of it. Governance and process,
so you can see they grade us on a qualitative scale of zero to four, with four being full maturity. And I’ll focus entirely
on the beige bars here, where you can see our
scores, which, in 2015, showed a lot of room for improvement. So to give you more flavor
of just how far we had to go, here were their recommendations
in the end of that report, to give you a sense of
how much we had to go, in terms of going forward. So they were saying back in 2015, by 2017, we need to really aspire to increase user participation in Sentinel. We needed to really broaden
access to the full set of modular programs to our staff. And here, they emphasized
particularly at CDER. We needed to introduce
much more systematic and consistent processes for
triggering the Sentinel System. We needed far greater
transparency than what we had. And then you can see here, they
even stepped down into 2020, and said, you know, in five
years, or four years later, try to aim, try to
aspire to take advantage of the high-level
customization capabilities that were already in Sentinel. And then by 2020, they also said, look, FDA has invested an
enormous amount of resources in creating this system, the goal is to regularly use it by 2020. So there is much to report, I guess, in progress in two years. You can see here, in the 2017 report, just how much has changed with substantial investment in our leadership. You now have statements
where Sentinel has been widely accepted within
FDA as a useful regulatory decision-making tool for safety issues. In fact, I’ll go deeper into that and say, all 16 Office of New Drug
Divisions responsible for regulating prescribed
drugs are involved in Sentinel in one way or the
other, and have been involved. The awareness of its capabilities among the staff has materially increased. It’s been incorporated into
the regulatory process, its results have been
instrumental in deciding a number of critical public health issues. The infrastructure and associate suite of tools are increasingly robust. And the last statement
here is really testament to just how much has changed. The question is now no longer
if Sentinel will be used in regulatory decision-making,
but rather how best to cost-effectively scale
and embed it even further. So even though things
looked fairly dire in 2015, I think we turned it around
in a couple of years in 2017. And so here’s those six
metrics I spoke about. Process, governance, talent
and organization, methods, analytic tools and technology,
and strategy and value. Some of the bottom three weren’t measured with numbers until the second assessment, but you can see now that
the program has reached what they determined to be
a high level of maturity. As we got the results back,
I did probe a little bit, and I said, well, can you give
me an example of a government agency or entity that has
reached full maturity? Thinking in the back of my
head, there’s a lot of respected agencies around, Centers
for Disease Control, it can intervene in Ebola outbreaks, NIH can have breakthroughs in science, and maybe US Special Forces, Seal Team Six, like, has
anyone actually made it? And I’m still waiting
for that response, so. (audience chuckling) But more seriously, no organization worth its salt would accept a marking of four as full maturity. We would always strive to do better, so I’ve made peace with the results. But I mention these, because again, it’s an independent assessment, it’s a longitudinal view, and you can go back, and see, and get more detail of just how we did. All right, so let’s review
the system we’ve created and how we’re using it in CDER. I have to go back to FDAAA, because that was the
originating legislation. And there are two key parts to this, which is Section 905 here, that mandated the creation of ARIA or Sentinel. And ARIA stands for
Active Risk Identification and Analysis system, that’s right out of the legislation here,
highlighted in red. And at the same time, FDAAA gave us, the first time the authority to require post-market required studies. And so Section 905 and
Section 901 are balanced on this paragraph right
here, which basically says, we have to consider
the sufficiency of ARIA before issuing a PMR. And if ARIA’s sufficient,
we cannot issue a PMR. And so what is this sufficiency? It’s having adequate data,
having appropriate methods to answer the regulatory
question of interest to lead to a satisfactory
level of precision. There’s all kinds of
epi-terminologies behind this, but this is at the highest level what ARIA sufficiency really is about. And so to that, we’ve created a range of analysis tools here. You can see summary tables, which produces simple code counts, Level 1 analyses, which produce descriptive
analyses on unadjusted rates. Level 2 analyses, which
produce adjusted rates with sophisticated confounding control. These are your epi studies that you’re used to seeing in JAMA, and
New England Journal, and Epidemiology, and PDS. And then there’s this Level 3, where you take those Level 2,
and you make it sequential. You look at risk assessments over time. So it’s really about the tools, ARIA’s really made up of the tools and the electronic data
in the common data model. So how are we integrating this? The assessment basically
concluded that we’ve done a decent job integrating it. And this is one of the
examples, where ARIA is now being integrated fully
into the approval letter. You can see here the intent to study a safety issue in the ARIA System. Paragraph one is just, it invokes FDAAA. Paragraph two says,
look, we are now studying septal perforation, cataracts,
and glaucoma in ARIA. And number three is a
paragraph about transparency, and when are you gonna see
the analysis parameters? When are you gonna see the study design, and then when are you
gonna see the results? All built into the approval
letter, now all public. So what have we done to date? Rich has talked about how we’ve
done 254 analyses to date. Here it is over time, just by quarter. The bottom boxes in blue
are the summary tables. The purples are the Level 1s,
and the tops are the Level 2s. And so there’s two flavors of
analyses here that you see. First are the more simple things that you can end in a Level 1. And the second is the package, the bundle of analyses required, that includes feasibility analyses as well as the inferential analyses that fully answer a safety issue. So it’s not a one-to-one correlation, it’s not like we’ve evaluated 254 safety issues. It’s the bundle required to answer it. All right, so we’ve become
much more transparent. And we’re trying to get more transparent. We’ve launched a new page called How ARIA Analyses Have Been Used by FDA. And I just want to make
a couple of points here. You can see just the wide variety of ways in which we’re using it. Everything from drug safety
label changes, which you would hope, here’s the new
warnings and precautions, helping us respond to citizen petitions, presentations, and advisory
committee meetings, helping us just do background
disease surveillance to inform product development,
of novel RSV therapeutics, et cetera, et cetera. And the other thing to note
about this page though is that in no one of these did Sentinel
play the defining role. That was never the goal of the system. We’ve said since the very beginning, 2009, that Sentinel was not meant
to replace any system, but to augment the strengths
that FDA already had. And so the bar here is that,
was Sentinel data considered as part of the regulatory decision? Did it make a meaningful contribution? And that’s what gets you here. The other point to see in this list is that there’s a natural delay. So even though we have 254 analyses done, it takes time for the analysis to be done, it takes time for FDA to
incorporate in the decision-making, and then it takes time for us
to clear and get these online. So expect more to come. All right, so let’s take a
look at ARIA sufficiency then. If you look at all the
serious safety issues evaluated in ARIA, this is
by sufficiency and year. And this, I should caveat and
say this is preliminary data. We’ve had 89 safety issues go through the system from 2016 to 2017. You can see the relative, in blue, which were deemed not sufficient to study, and which were deemed sufficient to study. And you can see the relative proportions. And overall, 49%
sufficiency rate is not bad. I should also say that these
are questions where FDA thought it was possible to do in
an observational design. So if we had decided
already up front that only a clinical trial could
answer the question, that’s not counted here. But for those subset of
questions where we thought this could be done in
an observational study, getting it at about half is about right. If you think about the FDAAA’s
originating legislation, where it said FDA can issue
and force manufacturers to do a post-market study,
but at the same time, we have to strengthen our abilities. And so that balancing on the fulcrum of sufficiency, 49%’s not bad. All right, so here’s the same set of 82, but now divided pre-approval
and post-approval. And you can see here, pre-approval, the sufficiency rate’s less. And I think that’s to be expected, whereas the post-approval, it’s higher. And why is that? Some of the reason here is that
the size of the safety database is much smaller, pre-approval. And so the questions are much tougher. And the questions are much wider ranging. They’re primarily driven
by the therapeutic mechanism of action, and
some small imbalances in the clinical trials that may
or may not be due to chance. And so you get this lengthy
list of possible questions. Whereas post-approval, it’s
usually driven by a single signal. And it’s well-defined, it comes out of, often, observational
data, and so following up with an observational study makes sense. So why do we keep all these metrics? Because it helps us develop the system. These are the reasons why we deemed ARIA to be insufficient in order. And you can see the first four are all because of lack of data,
whereas only the last is because we lacked a tool. So fixing a tool is easy, it’s writing the code and getting it to work. Fixing the data is a much larger problem. And Bob Ball will be leading a panel on that later this afternoon, so I don’t want to steal his thunder. All right, so as I said before, keeping track of this informs development, and here’s just a couple
of select examples. We had questions where, look, there was no mother infant linkage
and no analysis tool. And so we created that linkage, and it’s anticipated, and
it’s coming this October. There’s other questions where, look, even with the size of
the Sentinel database, it still wasn’t enough,
because it was all under 65, or primarily under 65. And it’s the elderly
population that uses the drugs. So we integrated Medicare
data, and that just went live. Incomplete cause of death, we’re trying to facilitate linkage with the NDI. And then performance of claims for outcome validation, as I said before. And there’s a range of projects there, and those are ongoing. So keeping the statistics
helps make us better. The transparency and access are important, one of our main
constituencies is industry. So how do we make sure
that this is transparent? And CDER notifies all sponsors, and is committed to doing that. And we post all Level
2 inferential analyses. And so we will notify, for
at least the pre-licensure, we will notify you within
the approval letter. Going forward in
post-licensure, we will send a custom letter to you
the moment that we have a final study design in place. We will list those L2 analyses on an ongoing assessment page,
which is live and you can see. We post, when they’re available, all the analysis
parameters, all the results. And so if you wish to, you can then take that analytic package literally and go to IMEDS and run it in the system. And change the parameters that
you think need to be changed. So we will continue to support IMEDS, to enable, as I said,
Sentinel infrastructure to external entities, such as industry. Additionally, what I learned this year is that the single most
important thing we could do to help open up Sentinel was to create a publicly available synthetic data set. And that’s what we’re doing. And fully worked examples so
that those in the community, especially academia, can
see exactly how this works, and deconstruct it and pull it apart. We are also doing public
training on tools and methods. CBER mentioned a whole
bunch of public trainings, we’re doing one tomorrow. So we are committed to this. So lastly, just I’ll have a last section, why having a Sentinel System matters. So I want to take you back, to what it’s like to
launch a multisite study. And the meningococcal vaccine study was a great example that Rich brought up. But here’s what it takes. You have to identify and
prioritize the issue. You have to find the
money within your agency and allocate the resources, and make an organizational investment decision. Then you have to do the contracting. You have to issue the RFP, you have to then wrestle
with the contracting office to show that the choice you made, which may not be the lowest cost bidder, is the rational decision
and the best for the FDA. You award it, and then the
work starts, after all of that. You have the study design,
you do your protocol, you do the statistical analysis plan. Then you have to extract
and QC all of that data from however many data partners
contribute to that study. Then you custom write the code, and you write it, and you QC that code. And then you begin the
feasibility analysis. And when you do the feasibility analysis, it often changes the code that you wrote, it often changes the
study design you wrote. And only after that do you finally do the inferential analysis. It’s a huge lift, and when
you do it as a one-off, none of this gets reused. None of this gets reused. So let’s talk about doing a multisite, multi study within a system now. So you take away several of these boxes. And what you have then is faster startup time,
you eliminate all of that contracting administrative burdens. All of the data quality
cleaning and things like that. And you reduce the time to completion. And this is why it’s
important to have a system. And so then you can scale
this to many studies and you get much, much lower costs, due to economies of scale. And one of the things that I’ll harken back to is, several years ago, Janet
Woodcock made a statement at one of these meetings, where she said, look, we can’t attract
top notch epidemiologists if we don’t have a top
notch epidemiology program, with a top notch epidemiology system. We’re getting to that point. And we can attract much better now. The other thing to think about is this is no different than
saying to a physician, for the physicians in the audience, look, we’ll take away all your
administrative burdens, you just focus on taking
care of the patient. What you see back here,
in these four boxes is this is what epidemiologists like to do. This is their bread and butter. This is their expertise. It’s not the proceeding four boxes. And so it’s a much more effective system. All right, and so once you
have a system in place, it makes large-scale endeavors far more cost-effective and faster. Any structural improvements that you make capitalize on the existing system. So if we integrate Medicare data, it’s not just integrating
a new data source, it’s integrating it into an ecosystem that has summary tables,
L1s, L2s, L3s in place. If you develop signal detection methods, that’s being integrated into an ecosystem that can already do signal investigation and signal evaluation. And so additionally, a
unified common platform that enables collaboration
with other systems. And this is probably one of
the most promising things, and I’m so happy that
Robert Platt is here to talk later in the afternoon. It’s when another system adopts the Sentinel Common Data Model, they
automatically get the tools. And now we create a possibility,
at least in this case, of a North American system that draws upon the data out of Canada. And for the Canadians, they can help draw upon the data in the US. And so it’s a symbiotic relationship. And there’s a lot of possibility there. Same thing with accessing PCORnet, which is the single
largest repository of EHRs. So the fact that they
have a very similar system means growing towards them is much faster. The other thing is you
can also expand things beyond post-market safety. And so at FDA, we talk
about this FDA-Catalyst, and this is run by the
Office of Medical Policy. And we’ve heard already
about the mobile app, and we’ve heard already about IMPACT-AF, but the two things about
this is that you have the claims data, you have the ability to now collect patient-generated data. And now you have the ability to randomize. And you start to create the
conditions that are ripe for doing distributed pragmatic trials. We’re not there yet,
but we’re almost there. All right, so this is
my second-to-last slide. It also makes smaller scale endeavors much more likely, much more frequent. It allows us to address a wide variety of agency needs, without the need to marshal organizational
support every time. For example, medication errors,
risk mitigation studies, biosimilars, generic drugs, our mandate is vast. And I’ll just pick medication errors, with Jill Wyath, who was leading a methotrexate analysis. Had she had to write a five-page proposal, get it approved by her supervisor, get it approved by her supervisor’s supervisor, take it to a committee of other offices that then she has to defend, and then, if she gets approval there and it doesn’t align with the budget timeline, wait a year for next year’s budget. All of that, and that’s not even talking about the contracting yet. Several years later, that’s
when her idea becomes reality. And here, she was able
to walk into our office and say, look, we wanna try this out, can we turn it around in six months? So it makes for a much
more innovative system, it makes for a much more positive
work environment for FDA. So in summary, Sentinel
is a mature semi-automated safety surveillance
system that is distinctive in its use of parametrized, reusable analytic tools, combined with a continuously quality-checked
multisite database. It provides a single common platform to help meet FDA’s needs in evaluating the safety of medical
products after approval. And the challenge for FDA, and Sentinel, and its collaborators, is to build upon these successes to date by
continuing to advance the core deliverable, which is post-market safety, but at the same time, expanding into all these other areas of need. Thanks a lot.
– Great, thanks. Thanks, Michael, so we’ll
turn it over to Azadeh. – Good morning, my name is Azadeh Shoaibi. I’m the lead of the Sentinel program at the Center for Biologics
Evaluation and Research. And I’m going to present key achievements of the CBER Sentinel program in 2017. The products that are regulated by CBER are diverse, and they are called biologics. We have vaccines, blood
components, and blood products, human tissues and cellular
products, gene therapies, and xenotransplantation products. So the activities under the Sentinel System, or Sentinel Initiative, basically look at the post-market safety of this
diverse group of products. In the past year, as you
have heard from others, CBER has focused on a major priority which was presented in last year’s Sentinel workshop, and that
is building a semi-automated national biovigilance program. We have made some progress in this area, and I will present these achievements later in my presentation. In addition to this new
area of development, CBER has continued to
integrate Sentinel activities to generate empirical evidence to support safety and also regulatory
decision-making for biologics. We have also prioritized
efforts to improve transparency and governance for the
CBER Sentinel program. These include establishment
of the CBER Sentinel Advisory Committee, which
is composed of members from all of our product
offices, as well as the Office of Biostatistics and Epidemiology. And in addition to this CBER
Sentinel Advisory Committee, we have also engaged the product offices and set Sentinel priorities
for the product offices, so that our work with
them will be more fluent and there will be more
engagement on their part, and we have also engaged them in different Sentinel trainings that have been conducted
throughout the year. Last year, we also, in
the same workshop here, announced another priority
for the CBER Sentinel program. And that is, as you heard from others, to make the program
better, faster, cheaper. And some of the areas of
improvement that we thought would be important to
take on to make this goal come to fruition are increasing capacity, decreasing costs, reducing the data lag, and also accessing electronic
health record data sources. And throughout my presentation,
I will provide examples, in terms of activities that CBER undertook in the past year to realize this priority. As you are aware, Harvard Pilgrim Health Care Institute is one... This is moving on its own. – [Michael] Can you go back one? – [Azadeh] All right, Harvard Pilgrim is one of the contractors and our partner for the CBER Sentinel program. So now I would like to present some of the achievements that the Harvard Pilgrim group (I’m not in control) has helped us to achieve. So we have increased
the use of query tools by about 63% in the previous year. The use of query tools
helps us to reduce the
time and the cost associated with some of the
protocol-based activities. So whenever possible, we
have tried to use the tools in lieu of the protocol-based activities. However, we still
had a number of ongoing protocol-based activities last year, and 10 of them were completed. The Harvard Pilgrim group also conducted
two training sessions for the CBER staff with regards
to the Sentinel program. Here is a breakdown of
the types of queries that we conducted last year. We had a total of 41, 37
of them were descriptive, and four of them were analytic. Now these, the results of these queries are used to inform our
regulatory decision-makings, and to address safety
issues related to biologics. For example, we looked at the geographic incidence of babesiosis, which
is a vector-borne disease. And we used that analysis
to inform policies related to blood donation
and blood testing. We also looked at adverse events related to transfusion of leukoreduced blood and the rate of transfusion
in high-risk populations. And we used the results
for risk assessment and blood safety purposes. I mentioned that 10
protocol-based activities were completed; most of
these activities focused on building the infrastructure
and developing methods, such as safety of vaccines in pregnancy, or vaccine effectiveness feasibility. But we also had a few
projects that looked at the potential association between specific medical products and
specific adverse events. Biologics, as you may
have heard previously, have certain characteristics. And because of these characteristics, the surveillance program for biologics also requires certain capabilities. So in order to expand and enhance
the CBER Sentinel program, and to also customize the
components of this program for the safety surveillance of biologics, in 2017 CBER awarded two new one-year contracts to IQVIA Institute and OHDSI working together. As Steve Anderson mentioned,
the name of this new program is Biologics Effectiveness
and Safety initiative. IQVIA Institute is formerly
known as QuintilesIMS, and it’s a private company. OHDSI stands for Observational
Health Data Sciences and Informatics, and it consists of a network of interdisciplinary scientists with an open source platform. The slides are moving faster than I am. So is that a sign? (audience chuckling) So, the two contracts that
we awarded last year: the objective of what we call contract one is to develop additional
surveillance capabilities as specifically required for biologics. And access to EHR data sources is a major component of this requirement. And the objective of
what we call contract two is to utilize innovative methods,
such as machine learning, natural language processing,
and artificial intelligence to mine unstructured data
such as physicians' notes or extensive pathology results, et cetera, in EHR data sources. So here, I'm showing the current model of the CBER Sentinel program. We have two major contracts. One is with Harvard Pilgrim
Health Care Institute, and the other one with
IQVIA-OHDSI collaborative under the BEST initiative. So now I would like to focus on the achievements of the BEST initiative in the last three months of 2017. This is moving.
– They’re trying to fix it in the back, but if we could. – [Azadeh] Okay, so it’s
very distracting to me. So I was going to say that we, I’m going to talk about the achievements of the BEST initiative in the
last three months of 2017, after the two contracts were awarded. So the BEST initiative has access to EHR data sources as
well as claims data. IQVIA provides about 160
million patient records from claims sources, 83
million patient records from billing data of hospitals,
and 44 million patient records in the form of EHR
from ambulatory care settings. We have other data partners,
including Columbia University, Stanford University,
University of Indiana, and Regenstrief Institute. And these data partners provide about 24 million EHR patient records. And all of these data
partners transform their data into the OMOP Common Data Model. And they refresh their data quarterly, so we have access to data that
are about three months old. The BEST initiative uses the OHDSI tools. And these tools are
freely available online for FDA’s staff or any
other scientist to use. And the tools do a range of activities, including generating protocols,
validated R packages, figures, tables, and
also draft manuscripts. So here, I’m showing a screenshot of one of the tools called ATLAS. ATLAS does a number of activities, such as defining cohorts,
generating patient profiles, conducting effect estimations,
and other activities. So because these tools
are freely available to FDA's staff, they can easily, after spending a short
amount of time at their desk, design their own queries
or epidemiologic studies. Here, I’m showing an
automatically generated protocol by the ATLAS tool. When you specify the parameters that are needed for a query or a study, then your protocol is
automatically generated. So this automation ensures
that all parameters are documented, it improves transparency for the exact specifications executed, and it also eases reproducibility. Similarly, one can
select the R code option to look at the validated R package that is generated after one
specifies the parameters for a study or a query, and the R code can be modified manually if needed. The tool also generates
figures and tables. We can use this feature for characterizing the available databases
or a specific cohort. Some of the specific technical
work that we have done within the BEST initiative
in the last three months of 2017 includes building a
library consisting of multiple coding systems for blood
exposure identification in both claims and EHR databases. This library has about 4,000 codes and we have queried all
of these codes within all of our data partners. We have also incorporated
ISBT-128 coding system into the OMOP Common Data Model. And that coding system
has about 14,000 codes for blood and other products. Now ISBT stands for Information Standard for Blood and Transplant,
and it is a global standard for identification, labeling,
and information transfer of medical products of human origin. Our experience with
claims data has shown that claims data do not have a sufficient amount of data, nor data granular enough,
for surveillance of blood components,
blood products, tissues, and other advanced therapies. So with the availability of the ISBT-128 coding system, we have seen and studied
this coding system and we have realized
that this coding system, which is a barcoding system used by blood banks and hospitals, has very granular data for our products. And that's one of the
reasons that we incorporated ISBT-128 into the OMOP Common Data Model as one of the first steps we took. And also, the BEST initiative conducted three training sessions in
the last three months of 2017. So the main accomplishments
of the BEST initiative are the expansion and
enhancement of the CBER Sentinel program, with respect to data
and infrastructure so far. So we have added EHR data sources. We are using a flexible and
expandable common data model. We have reduced the
data lag significantly. And we have access to query tools that are available to everyone. And also, we are using
innovative methods to identify and report blood transfusion
related adverse events. And this program has also created a significant cost reduction. So we awarded two new contracts last year to
the BEST initiative. But both of them are one-year contracts. So in order to continue the activities of the BEST initiative, CBER initiated market research for two potential new BEST contracts in fiscal year 2018. And here, I'm showing the, it's moving. (chuckles) – [Man] Thinking that
perhaps it’s double-clicking, it might be sensitive, so maybe just – Okay.
(muffled speech) So here, I'm showing the website address for the requests for information, the two RFIs that were released in January of 2018. In conjunction with these two
RFIs that have been released, CBER is hosting an industry day. It’s a public meeting,
we call it industry day, on February 12th, which
is this coming Monday at the White Oak Campus. And this is the
registration site for that. And the purpose of the
industry day is to provide an opportunity for potential
vendors to ask questions about these two RFIs
that have been released. So in summary, CBER has made
significant improvements to the biologic surveillance system, by addition of EHR data
sources, advanced tools, reducing the data lag,
and using a flexible and expandable common data model. We continue to integrate
surveillance activities of Sentinel into the
regulatory decision-making. And we have, by managing
our existing resources, we have made the surveillance activities and the program more sustainable, cost-effective, and efficient. And these are steps taken towards building the semi-automated national
biovigilance system that I mentioned earlier. And I would like to acknowledge
a large group of people and many entities who
contribute to ongoing work of Sentinel activities,
including more importantly, my team, CBER Sentinel Central Team. – Okay, great, thank you very much. Apologize for the technical difficulties. We are running quite a
bit over on this panel, so I’ll quickly turn to Brian from Amgen, to get some of your reactions, thank you. – Okay, so I’ll start by saying I’m glad I don’t have slides. So thank you to the
organizers for the opportunity to be here today, to Drs.
Nguyen, Shoaibi, and Kashoki. It’s been very helpful to
talk with you in advance of the meeting and to hear your comments here today about Sentinel’s achievements, and the new initiatives that
the agency is embarking upon. The FDA Sentinel System
represents a significant advancement in our
ability to systematically conduct post-marketing drug
safety research and analyses. As a taxpayer, a husband, and a father, I personally believe
that having the ability to rigorously monitor the
safety of medicines before and after they receive
regulatory approval is critical. And as a pharmacoepidemiologist,
I believe that the use of healthcare databases
combined with an appropriate study design and robust analytic methods can yield important and
reliable information about the use and safety of
medicines in many circumstances. So I work in the Center for
Observational Research at Amgen, and I lead our data and analytics center. We sit in the development organization and are made up of
epidemiologists, biostatisticians, statistical programmers,
and data scientists. Our group is charged with
developing and advancing the use of real-world
evidence to support medicines throughout the drug development lifecycle. An important element of what we do is to provide robust
epidemiologic evidence regarding the effectiveness
and safety of our medicines. Nearly three years ago,
our team downloaded all of the Sentinel modular programs into our real-world data platform,
which has been harmonized to the OMOP Common Data Model. We built an ETL process to convert data from the OMOP model to
the Mini-Sentinel model so that we could run
Sentinel modular programs without affecting their
underlying integrity. We decided to integrate
the modular programs, so that we’d become more familiar with and better understand
what they were doing, but also to leverage the
extensive methodologic work that had been incorporated
into their development. Today, we use these programs to address a wide array of questions,
relating to our medicines, both in development and in
the post-marketing setting. In the process of doing
so, we have learned a great deal and have
a better understanding of the strengths and potential
limitations of this system, and have taken the opportunity
to make adjustments so that these programs
could better address the biologic medicines which we produce. Some of the comments I will give today will reflect some of that learning. So since being invited to
participate in this meeting, I’ve taken the opportunity
to interview colleagues and stakeholders across
the healthcare ecosystem. Epidemiologists in safety organizations, academic pharmacoepidemiologists,
government employees, all with the goal of gathering feedback, so that my comments here today reflect a more representative
view than simply my own. I’ve synthesized the
feedback that I have received into four major domains. Reflections on the foundational
work to build Sentinel, a need for greater education
about Sentinel processes, a desire for greater awareness
and potential engagement when queries are executed,
and opportunities for stakeholders, including
industry, to collaborate with the agency in the evolution
of the Sentinel System. So first, all stakeholders
that I spoke with recognize that the development
of the Sentinel System has been a significant achievement. The FDA took the mandate
in FDAAA, and designed, built, and operationalized a post-market safety surveillance system that can be reliably used to address
post-marketing safety questions. Throughout the process, the agency engaged thought leaders in
pharmacoepidemiologic research, and built a system based on
sound methodologic principles. And engaged data partners
across the United States, who have expert understanding
of their data systems. Now that system has over
180 million lives covered, which is quite a sample size, even though it does reduce in certain circumstances. The system continues to evolve and expand, with a focus on improving data quality, not just getting more data. There is also increasing
awareness that the system is being used to answer a
wide range of questions, as evidenced by the many
ARIA reports that have been released on the Sentinel website. And now, by being
integrated into regulatory decision-making, as Dr.
Nguyen showed you earlier, as well as label changes,
which is an important advancement. Second, there's an opportunity
to increase awareness about the extensive processes
that go into executing safety analyses within the system. It may help to educate
stakeholders about the time it takes to develop
analytic specifications to ensure that they are done
to meet the needs of the study, to work with the data partners in all that’s entailed in
doing so, and to execute these analyses and synthesize results. Sponsors who will work
with IMEDS will learn this, but there may be opportunities
to broaden awareness about what is required
to do these studies. Those of us who do it know
how complicated it is. But I think more broadly, it is important to federate that knowledge. There is a desire, thirdly,
to have greater awareness by industry when queries focused on a sponsor’s medicines are being executed. And if possible, find ways
for industry to provide input into how these analyses are conducted. It may not be permissible
within the legislation, but given the significant
expertise that sits within the walls of
biopharmaceutical companies about our medicines, are
there ways that we can play a more active role in understanding
the safety of medicines. And finally, if we believe
that the Sentinel System should be a learning system,
is there an opportunity to engage stakeholders who have worked with the Sentinel System to help gauge its suitability for addressing
a wide range of safety questions, including those
related to biologic medicines. For example, our team, when
we integrated the programs, we realized very early on that they were not well-designed for biologic medicines. So we had to tailor them to
meet those medicines’ needs. I think those learnings can
be helpful to the agency. But additionally, we know
in the methods communities that there are a lot of challenging issues in doing post-marketing
pharmacoepidemiologic research, like informative censoring
and treatment discontinuation. All of which can impact the reliability and validity of these results. Those of us that are
challenged by these problems are developing analytic solutions, and we think it can be helpful
to engage with the agency, and others, in the
evolution of this system. So I thank you for the
opportunity to speak today, and hopefully my comments
have been helpful. – Okay, great, thank you very much. Mwango, sorry, I’m
gonna have to ask you to maybe err on the shorter
side, or the quicker side, if possible, so that we can try to finish. ‘Cause we’re way over.
– I totally understand. – Thank you.
– I’ve been trying to edit as we’ve been going along. So hello is gonna be my first opening. And I just want to say that my comments summarize an OND experience, but the Office of New Drugs, OND, is fairly large and diverse. So there are unique experiences
across the different review divisions, as well as
commonalities in the experience. But I’m just giving you the
high-level summary of these. And so in terms of integration of Sentinel into the drug review process,
and into the specific processes for the Office of New Drugs, as would be expected with
any new scientific resource that’s intended to inform
regulatory decisions, it has been a period of
learning and adoption of Sentinel and ARIA by the review staff within the Office of New Drugs. As Sentinel’s capabilities have matured, so have our staff’s understanding and use of these capabilities. And to the point now, as hopefully you got from Michael’s slides, that
Sentinel is now considered an important source for safety information about approved and marketed drugs. To date, within the
Center for Drugs, or CDER, the primary users of the Sentinel System are the staff in the
Office of Surveillance and Epidemiology, or OSE,
and the Office of New Drugs, with a lot of very
necessary and helpful input from the biostatisticians
in the Office of Biometrics. The OND and OSE staff collaborate
very closely on safety issues, both in the pre
and post-market spaces. And engage in a lot of discussions about the potential use of
ARIA and ARIA queries, as well as the sufficiency of ARIA when we’re making those
very specific determinations as to whether we should require
a post-market safety study or trial of a sponsor to
further evaluate an issue. Through these collaborations,
we’ve developed and are continuing to refine our processes for staff discussions with
the intent of ensuring that they happen in a systematic way. And particularly in the context of review of new applications, that they
happen in a very timely way so that we can meet our deadlines. Additionally, as we’ve
gone through our learning, we’ve summarized the
specific considerations that FDA makes when we’re determining ARIA sufficiency to
evaluate a safety issue, and therefore, whether a post-marketing safety study or trial is needed. We have enough information to
put this into documentation, and so it’s our intent to reflect these specific considerations
in either a MAPP (Manual of Policies and Procedures) or a guidance that can be made available and therefore, more
transparent in the specific things that we think about. Michael outlined the
really broad framework for sponsor notification,
about planned ARIA analyses. And as we’ve committed to
in our PDUFA VI letter, we will continue to
refine those parameters and become more detailed in identifying when we
would notify companies about planned ARIA analyses, the extent of information
we would provide, and how we would do so. But for now, companies
can expect the following: if they have
submitted a marketing application, and FDA is intending to do an
ARIA analysis for a new drug, we would notify them
through the approval letter that is issued by the Office of New Drugs. And in the post-approval
space, companies should expect to receive a formal communication
in the form of a standard letter issued from the Office of
Surveillance and Epidemiology. It’s been alluded to earlier this morning that a happy consequence
of all of this progress that’s been made is that we have even more regulatory questions to consider now. People have mentioned
earlier about considerations about whether we can use, or should use, Sentinel for data mining
or signal detection. Other things that we have to consider is, we make decisions based on the
best available information, and so we have to consider
where ARIA analyses and results fit in, in
terms of a hierarchy of considering the
information that we do wave. And then also, we sometimes
conducted an ARIA analyses when we’ve had either
uncertain information or contradictory information, and so then what do we do if the
last bit of information that we received is that from ARIA? Recognizing that in and of itself, the claims data and the
analyses that we perform do have their own set of limitations. So to sum up, I think we
know that observational data, and observational
studies, have been very useful to us in terms of evaluating
drug safety issues. Information that we’re
getting now from Sentinel is another form of observational data. And it’s really just been
tremendous to have this advance, adding to our observational data and study armamentarium. I hope I made sense, I talked really fast. Thank you.
– Great, thank you, I appreciate you taking
all of the presenters and going through this rather quickly. So I’m gonna use my
prerogative as the moderator to allow us to go a few minutes over. It’s really important that
we have the opportunity for the audience; we might have
time for one quick question. First one to the mic. While they’re doing that,
I do have one question that came up a couple of times, IMEDS. We are gonna hear about
IMEDS later in the afternoon, but from a user’s perspective,
not from the FDA’s, and if you don’t know, IMEDS is a program offered by the Reagan-Udall Foundation to enable the industry to have access, and to engage with Sentinel
System partners and tools. Michael, you mentioned ARIA and how it’s becoming an integral part of FDA processes and decisions, even the approval letter
that included a planned post-market analysis
using the ARIA system. Brian, you mentioned that
stakeholders would really like to collaborate
better with the system, and mentioned IMEDS as well. So my question to Michael, or Mwango, is, when you’re thinking about sort of ARIA in the post-market system,
as part of an approval, is that for companies to perhaps engage with IMEDS to do that, or is that more of a planned analysis that
FDA would be doing? So with IMEDS available
for companies to use, how are you seeing whether
or not FDA’s gonna be using the ARIA System, or if the company’s going to be doing that under a PMR? – So thanks for the question. So ARIA, as I said before, is the tools and the electronic data
in the Common Data Model. If we made a decision or a determination that ARIA is not sufficient
to study the safety concern, we will issue a PMR,
generally speaking, for it. We’ve created IMEDS to open
up Sentinel to industries so that they could satisfy
the PMR in Sentinel, if they wanted to, and
through that interaction, then you use the normal
bilateral relationship between FDA and industry
in the context of a PMR to collaborate on that PMR. But in general, we don’t
specify a system ever. But certainly it’s a
system that is available and that we’ve accepted as a source. – Great, thank you. I’m gonna, so nobody,
so I’ll take one more. And this is to Azadeh
about the BEST system. So we heard quite a bit about
the system this morning, there’s sort of the
rationale and the needs for blood products and the like, in terms of richer data to sort of be able to measure the important
exposures and outcomes. And you certainly talked
about it in your presentation. But for those in the
audience who may not be fully aware of the
differences in the system, BEST, you showed a lot
of data availability, including a lot of
electronic health records. And the Sentinel system that
Harvard Pilgrim is operating, also has data partners that
have electronic health records. Is there overlap or is there a plan to, for some analyses to dip
into sort of both systems, because there might be overlap in patients and in data types? – Well, I think the
Harvard Pilgrim component of the Sentinel program,
CBER Sentinel program and the BEST initiative component, each one has its own place. With respect to data specific to one group versus another, I think if I’m not mistaken, about perhaps 10% of the data providers for the Harvard group have
electronic health records. Versus the BEST initiative, which perhaps provides us with more EHR data sources. But I think the issue is more related to how that data is actually utilized and that would be related to the common data model. Whether a data model is
expandable and flexible enough to be able to quickly adopt
these existing and new coming coding systems, such as the
ISBT coding system, or not, that would be an issue that
would make a difference for us. Otherwise, we are not,
we are kind of agnostic to what common data models should be used, as long as it can provide us
with the data that we need. And also, the data lag is another issue that we need to have access
to data that are more recent, than for example, nine to 12 month lag that we get from the Harvard system. And the other issue is
related to the tools that we have, for example, HCA EHR data available, but our tools are not,
the Harvard Pilgrim tools are not ready yet to run queries on the EHR data that HCA provides for us. So I think each system has its own place and advantages and disadvantages. But these are some of the issues that we have experienced
with respect to blood. – Great, thank you. I’d like to thank all of our panelists for such great presentations and comments. I’ve got good news and bad news. The bad news, I’ll start with that, is I’m taking five minutes
away from your lunch. So we are gonna start
back here exactly at 1:15. But the good news is you can
bring your food back here, if you’re running out of time. And you can certainly bring it in here. There are restaurants in the area, and Sarah at the registration desk can give you a sheet that
has some of those listed. Thank you.
(audience clapping) – All right, thank you.
– How are you? (people chatting) – Set of panelists to join
me up here on the stage. Hi, how are you? Good to meet you. (people chatting) Should I get started? Yeah, okay, evolution
of the Sentinel System. In this next session,
we’ll discuss some examples of how access to and use
of the Sentinel System’s tools, data, and
infrastructure is expanding. And in increasingly more diverse ways, Sentinel is being used to
generate a range of evidence. We heard a little bit
about that this morning, this next session includes
a set of panelists who all have various
experiences in engaging with the Sentinel System,
or in the case of CNODES, complementary systems in Canada. Joining me on the stage
are Allison O’Neill, Epidemiologist at the Center
for Devices and Radiological Health at FDA. Chris Granger, Professor of Medicine and Director of the
Cardiac Intensive Care Unit at Duke University Medical Center. Robert Platt, Professor,
Departments of Pediatrics and of Epidemiology, Biostatistics, and Occupational Health
at McGill University, and Co-Principal Investigator
of the Canadian Network for Observational Drug
Effects Studies, CNODES. Beth Nordstrom, Senior Research Scientist and Executive Director of
Epidemiology at Evidera. And finally, Claudia Salinas,
Senior Research Scientist, Global Patient Safety at Eli Lilly. And I think I’ll go ahead and
turn things over to Allison. You’re welcome to.
– Is there a clicker? – [Gregory] Yeah, I’ve got it right here. – Oh, thank you, good afternoon. My name is Allison O’Neill,
and I’m an epidemiologist in the Center for, whoops, there we go. Devices and Radiological Health at FDA. And I have been asked to present today to describe the use of the
Sentinel infrastructure for the evaluation of a medical device. I’ll also touch on key
challenges with monitoring safety of devices versus drugs,
as well as opportunities to continue leveraging Sentinel for medical device surveillance. We performed a device evaluation
of use of laparoscopic power morcellators, which
I’ll refer to as LPM. So just a bit of
background, uterine fibroids can be removed via
laparoscopic procedures, which are also referred to as
minimally invasive surgery, which are performed with or without LPM. Benefits of minimally invasive surgery may include shorter hospital
stay, faster recovery, reduced wound infection, and
reduced incisional hernias, compared to using a
more traditional larger abdominal incision known as a laparotomy. LPMs are a class II
device, which are inserted into the body and used
to cut fibroid tissue into smaller pieces for easier removal. However, a safety concern
has been identified with LPM, which is that uncontained intraperitoneal morcellation of uterine
tissue which contains an unsuspected malignancy can result in dissemination and
upstaging of that malignancy. The FDA estimates that between one in 225 to one in 580 women undergoing
surgery for fibroids may have an occult uterine sarcoma. So an immediately in
effect guidance document was issued by FDA on April 14, 2014, to communicate new
contraindications and a boxed warning for LPM during
laparoscopic surgery. And after FDA issued
the guidance document, there was anecdotal evidence
that the availability and use of morcellators
decreased drastically, due to actions by some
manufacturers, payers, and hospitals. Therefore, some asserted that
use of minimally invasive procedures decreased as well. This is a table published
online by FDA in December that shows seven publications
reporting on the changes of percentages of various
procedures between 2013 and 2015. It generally suggests that
the percentage of women undergoing open abdominal procedures was higher than prior to 2014, although reported changes
varied between reports. Two reports showed a large
decrease in the use of LPMs, but there were mixed results
for any change in percentage of laparoscopic and minimally
invasive surgical techniques, although they remained a common
procedure in all reports. So we decided to analyze the Sentinel data to assess the rate of LPM use
and laparoscopic surgeries before and after FDA’s 2014 actions. So we wanted to look at the
following study questions. Did use of LPM decrease after our safety communication in 2014? And also, did the proportion
of procedures performed laparoscopically also decrease in favor of abdominal or vaginal procedures? Here’s just a quick look at our methods. The date range was 2008 to 2016. All women with six month
continuous enrollment and no previous hysterectomy
were considered eligible. We then identified women
receiving hysterectomies or myomectomies to remove fibroids, except for radical
procedures to remove cancer. And we grouped them into laparoscopic, abdominal, and vaginal procedures. We then used a HCPCS
Level II code to identify procedures that used a morcellator. For our results, this slide shows the use of the morcellator code over time. The vertical purple line
shows the month of April 2014, the date of the safety communication. And the use of morcellators,
as identified by this code, had been increasing from 2008 to 2014. And was then seen to decrease for both laparoscopic hysterectomy,
shown by the red line, and laparoscopic myomectomy,
shown by the light blue line. After April 2014, there was
no evidence that uterine fibroid surgeries
decreased during this time. And this one shows the
percentage of procedures which were performed laparoscopically with or without morcellation. The percentage of hysterectomy procedures performed laparoscopically
shown on the left was rising from 2008 to 2012,
when it plateaued around 60%. It declined slightly around
2014, but stayed around 60%. The percentage of myomectomy procedures shown on the right was
rising from 2008 to 2013, when it plateaued around 35 to 40%. After 2014, it decreased below 35%, but it increased again
to its original peak. Therefore, it is possible
that the decreased use of LPMs after the safety communication
may have been associated with a plateau or temporary decrease in the laparoscopic approach
for these procedures, but it did continue to
be used at a high rate. So to summarize, we observed
that the use of LPMs, as reported by this code in hysterectomy and myomectomy procedures
drastically declined after FDA’s safety communication. The percentage of procedures
performed laparoscopically experienced a small decline and a plateau, but a large decrease was not seen. This indicates that patients
are still receiving laparoscopic procedures
without morcellation, which have benefits for patients
recovering after surgery, such as those discussed earlier. In 2017, FDA conducted an
updated assessment of LPMs, including this
analysis as well as a review of the literature and medical
device reports, or MDRs. Available evidence was
consistent with our previously estimated prevalence of occult sarcoma, and worse patient outcomes
for morcellated malignancy. FDA continues to warn against using LPM in gynecologic surgeries to treat patients with suspected or confirmed cancer, and in the majority of
women undergoing myomectomy or hysterectomy for uterine fibroids. The Sentinel data was valuable to the team in assessing the impact of our safety communication
on clinical practice. The analysis confirmed that
the use of morcellators declined as recommended
by the communication across multiple healthcare systems. So the main challenge for this analysis was related to identifying
the procedures using LPM with a CPT or ICD code,
which is a challenge that is relevant for any
source of claims data. As we heard this morning,
regarding some unique data needs for medical
devices, devices do not have a National Drug Code and
we must rely on a CPT or ICD-9 or -10 procedural
code used for billing. The procedural code may or may not capture the use of one or more devices. And in many instances, the
brand of device is not known, unless there’s only one brand
on the market at that time. In this case, we used
a Level II HCPCS code, which is simply labeled morcellator, and it does not provide detail
regarding the type or brand. It was also difficult to
assess the consistency with which that code is
or was used in practice. And finally, we kept this
simple with a Level 1 analysis of background
rates for these procedures to get a snapshot of how
the rates changed over time without more advanced
time series analyses. So there have been challenges
associated with the use of Sentinel data to assess
use of medical devices, due to some of the obstacles
and limitations discussed. However, there are
definitely opportunities to utilize Sentinel for
more analyses for devices that can be identified
using procedural codes. And in the future, as the
Unique Device Identifier, or UDI, continues to be
adopted and incorporated into EHR and claims data,
we’ll be even better able to identify use of the
specific device of interest. Additionally, CDRH is in the
process of building NEST, which is our National Evaluation System for health Technology, which
will be a collaborative national system to link
and synthesize data from different sources across
the medical device landscape to generate evidence across
the total product lifecycle. As you heard from Greg
Pappas this morning, the development of
coordinated registry networks within NEST provides the
opportunity for potential collaborations and data linkages, as part of our long-term goal to expand our ability to assess real-world safety and effectiveness of medical devices. So in conclusion, this
was a valuable resource to the CDRH team to
assess background rates of device use and possible
impact of FDA’s actions on real-world use and practice. And it provides a demonstration
of how the Sentinel infrastructure may also be
used for medical devices. As CDRH moves forward in the development of NEST, using UDI and
device registry networks, and as the Sentinel System
also matures and evolves, for example with the use
of EHR, there will be great opportunity to perform
more detailed and accurate assessment of device
performance in the real world. Thank you.
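The claims-based classification described in this talk (grouping hysterectomy and myomectomy claims into laparoscopic, abdominal, and vaginal routes, and flagging morcellator use with an HCPCS Level II code) can be sketched roughly as follows. All code values and field names below are illustrative placeholders, not the actual study definitions.

```python
# Sketch of claims-based procedure classification, in the spirit of the
# analysis described above. Code sets and field names are illustrative
# placeholders, NOT the actual study definitions.

LAPAROSCOPIC = {"58541", "58550"}   # hypothetical CPT codes, laparoscopic routes
ABDOMINAL = {"58150", "58180"}      # hypothetical CPT codes, open abdominal routes
VAGINAL = {"58260"}                 # hypothetical CPT code, vaginal route
MORCELLATOR_HCPCS = {"C9999"}       # placeholder HCPCS Level II morcellator code

def classify_procedure(claim):
    """Assign a surgical route and a morcellator flag to one claim record."""
    codes = set(claim["procedure_codes"])
    if codes & LAPAROSCOPIC:
        route = "laparoscopic"
    elif codes & ABDOMINAL:
        route = "abdominal"
    elif codes & VAGINAL:
        route = "vaginal"
    else:
        return None  # not a hysterectomy/myomectomy of interest
    return {"route": route, "morcellator": bool(codes & MORCELLATOR_HCPCS)}

claims = [
    {"procedure_codes": ["58541", "C9999"]},  # laparoscopic with morcellator
    {"procedure_codes": ["58150"]},           # open abdominal, no morcellator
]
print([classify_procedure(c) for c in claims])
```

In a real analysis, the code lists would come from the published protocol and the claims would already be in the Common Data Model format.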
– Great, thanks, Allison. We’ll turn to Chris.
– Great, thanks. And I am delighted to
talk for a few minutes about the IMPACT-AFib trial, which you’ve already heard
about from Rich Platt and others this morning. But this has definitely been
one of the most exciting and innovative things that
I’ve ever been involved with. And I will share some thoughts about it, but begin by noting that
there was a Collaboratory Grand Rounds on January 5,
that went into the project in some detail with
the link here for those who are more interested in hearing more complete details about the program. I’ll also just mention that this morning reminded me that there is this fine line, as we talk about evidence
generation and safety. In this
case, I’ll make the case that here we have an example
where we have a treatment that prevents almost all
strokes for a common condition, yet it’s only used in about
half of patients for whom
guidelines indicate that it should be used. And I think that’s also a safety issue. When we have a
clearly effective treatment that’s not being used, that’s a public health
safety issue as well as an efficacy issue. The genesis of this program
came in part from a partnership with Sentinel and the
Clinical Trials Transformation Initiative back in 2014, with a plan to use Sentinel to explore conduct of randomized clinical trials. And so IMPACT-AFib then is
the first such trial, with this direct mailer to
health plan members who are at high risk for stroke and
not on oral anticoagulants, to encourage these members to go and discuss with their
provider whether they should be on an anticoagulant. Rich has shown you this here, the five data partners on the left, and the research partnership on the right, including the patient representative, and the Clinical Trials
Transformation Initiative. Atrial fibrillation is
the most common arrhythmia. It’s more common in older
people, and about five million Americans have atrial fibrillation. It causes about one in
five strokes, so about 20% of all strokes are caused
by atrial fibrillation. And we can prevent about
70% of those strokes with oral anticoagulation. So we have an extremely effective therapy for preventing those strokes. But what we see in a variety of data sets, and most relevantly in
the Sentinel data itself, is that only about half
of the patients in the data set who
have atrial fibrillation and risk factors for stroke,
as categorized by this commonly used clinical risk score,
the CHADS-VASc score, have ever been treated with
anticoagulants, and even fewer are currently being treated
with anticoagulants. We did a cluster randomized
trial published last year to show that a largely
educational intervention can improve the proportion
of patients treated with anticoagulants who have
indications for that treatment. And that intervention
was also associated with a significant reduction in stroke. So the study design then for the IMPACT-AFib
trial is to take patients who have atrial fibrillation
based on at least two claims, one in the prior year, CHADS-VASc of at least two, no recent admission for bleeding, and not on any anticoagulant
for the past year, defined as no claim for a prescription and no INRs that had been
obtained, which could identify patients who are on low-cost
warfarin, where they might have paid with cash and not entered a claim. And then the plan is to
randomize 80,000 patients. Now Rich tells me that as of today, 38,000 of the
intervention mailings have gone out. So we’ve got a total
of 76,000 patients now randomized and anticipate the
remainder of approximately 80,000 to be randomized,
and with the intervention implemented in the next couple of weeks. The aim is to increase the
use of oral anticoagulation, defined as at least one
prescription filled, but we’ll look at a
variety of other outcomes, including clinical outcomes,
admissions for stroke and bleeding, and the number of patients who remain on an oral anticoagulant based on claims at the end of one year. The intervention, Rich has shown this, primarily focused on patients, with a carefully constructed
packet from the FDA, Harvard, Duke, and the data partners that outlines the rationale
of why it’s so important for patients with atrial
fibrillation to be on an anticoagulant to prevent
stroke, including a pocket card designed to facilitate
the interface between the patient and the patient’s
healthcare provider. With the intent of using the
patient as an agent of change, to help prompt the provider
to treat the patient with an oral anticoagulant
if that is appropriate. The providers also are being notified, but we actually don’t expect that to be a major part of the intervention. And then after one year,
even in the control group, the providers will also be notified. And this is the series of
mailings that have occurred, again, now up to 38,000. And of the patients identified
with atrial fibrillation, about 36% of them, a total of about 40,000 patients, were
eligible for the study. Importantly, we also did not obtain informed consent, because we fulfilled the criteria
for waiver of consent. So we can truly have an
unselected population and a truly pragmatic trial
to test the intervention. I’d like to acknowledge the team of people who have worked incredibly
hard on this trial, that are shown on this slide. And I’ll conclude then with the thoughts that the IMPACT-AFib team
has shown that it is possible
to conduct a sufficiently
large pragmatic randomized clinical trial using
other opportunities to do further randomized trials with
the Sentinel infrastructure. And the types of interventions
that are appropriate at this point include
low-risk simple interventions, like patient or provider education, that are addressing large gaps in care with large potential to improve
important clinical outcomes. And I think there are
lots of examples of that. Ideally, or at least
something that I think is particularly important
in the IMPACT-AFib trial, avoiding the need for patient consent, in order to get a truly
unselected population. And some of the possible
clinical gaps to address might include statin intolerance. So here we have, especially for patients with established vascular disease, a life-saving therapy that
prevents myocardial infarction, stroke, and death, with about 20 to 40% of patients having perceived intolerance, but knowing that only about 5% of the time is that true intolerance. So lots of patients not
being treated who could be. Another example is diabetes
with vascular disease, where we know that only
about 14% of those patients are treated with what the
American Diabetes Association recommends, that is, the treatments which have an evidence base for use. Thus, a major opportunity
also for improving care there. And there are lots of other opportunities. I think there’s a long list. And one of the keys, I think,
is to identify the most important opportunities, to
identify funding sources, including the NIH, and
then do feasibility studies to see what might be most appropriate to address in the Sentinel infrastructure. Thanks.
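The claims-based eligibility screen described for IMPACT-AFib (at least two atrial fibrillation claims with one in the prior year, a CHADS-VASc score of at least two, no recent admission for bleeding, and no anticoagulant claim or INR in the past year) can be sketched roughly like this. The record layout here is hypothetical; only the thresholds come from the talk.

```python
# Rough sketch of the claims-based eligibility screen described for
# IMPACT-AFib. The thresholds mirror the talk; the record layout is a
# hypothetical simplification, not the actual trial specification.

def eligible(patient):
    """True if a patient record meets the trial's claims-based criteria."""
    return (
        patient["afib_claims"] >= 2                 # at least two AF claims...
        and patient["afib_claims_past_year"] >= 1   # ...one in the prior year
        and patient["chads_vasc"] >= 2              # stroke risk score threshold
        and not patient["recent_bleed_admission"]   # no recent bleeding admission
        and patient["anticoag_claims_past_year"] == 0
        and patient["inr_tests_past_year"] == 0     # proxy for cash-pay warfarin
    )

cohort = [
    {"afib_claims": 3, "afib_claims_past_year": 1, "chads_vasc": 4,
     "recent_bleed_admission": False,
     "anticoag_claims_past_year": 0, "inr_tests_past_year": 0},
    {"afib_claims": 2, "afib_claims_past_year": 1, "chads_vasc": 1,
     "recent_bleed_admission": False,
     "anticoag_claims_past_year": 0, "inr_tests_past_year": 0},
]
print([eligible(p) for p in cohort])
```

In practice, this kind of screen runs inside each data partner against its own claims, and the qualifying patients form the denominator for randomization.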
– Great, thank you very much, Chris, Robert. – Thank you, perfect, okay. So I’m gonna be talking
a little bit about the Canadian Network for
Observational Drug Effect Studies, or CNODES, which a few of the speakers
described a little bit this morning as the Canadian Sentinel. And some of the work that
we’re doing now together with Sentinel on some collaborations. I have a couple of boilerplate
slides that I’ll skip over. The bottom line is CNODES
is a national network with sites across Canada. My colleague, Samy Suissa is the Nominated Principal Investigator. The coordinating center, and the bulk of the administrative
and methodological team is located at McGill in Montreal with us. This is also important to note, it’s funded by our Canadian
Institutes of Health Research, which is our NIH equivalent, as a contract, as opposed to being funded directly
through Health Canada. So our processes at
CNODES are very similar to what’s done at Sentinel
on a slightly different scale and with some different approaches. We receive primarily safety
queries from our stakeholders. And our main stakeholder is
Health Canada, similar to here. We’re asked then to, based on
the query, develop a protocol, refine the query, develop a
statistical analysis plan, implement the statistical analysis plan, and then do this in a distributed
way, similar to Sentinel, where the analyses are done in
the individual CNODES sites, brought back to the coordinating center, and then meta-analyzed. Again, in a process that’s
similar to Sentinel, but with a lot more
hands-on than what you heard about this morning with the distributed analytic tools that Sentinel uses. This is an example of a CNODES output. Looking at high-potency
statins and diabetes, it essentially looks like
a standard meta-analysis with the different
Canadian sites as studies. You’ll note in there that
we include CPRD from the UK and MarketScan from the
US as sites as well. And this is primarily
to increase sample size and, in both cases, to get comparison groups
relative to Canada. Because as Richard said this morning, big data are sometimes not very big, when you actually get to
analyzing the numbers, and you’ll see some of the case numbers are quite low, even in
some of the bigger sites. Ontario is the biggest province, and there are still relatively
few endpoints there. I’ve shown this slide several
times in the last few months, some people may have seen this before. This just illustrates a
typical CNODES timeline, and the reason I’m showing this is that it takes a long time. The speakers
this morning made reference to this as well: it takes
a long time to conduct one of these distributed network studies where you’re developing the
study, the analysis plan, the analysis, the code, et cetera, on a study-by-study basis. So this one, as an example,
took about two years from the initial query to
finalization of the results. Some of the advantages of this approach, I think, are fairly obvious. One, we have a lot of
analytic flexibility: we’re not
limited by the analytic tools, and
we can design a study as complex as we want, using whichever data sources from each site we choose. We can use statistical
methods that may or may not be part of the standard toolkit
in a distributed setting. We’re also doing a lot
of capacity building in this process, as I said. We have analytic sites in each of our provincial data holders. And we’re building
capacity in data analysis and in methods across the
country, as part of this process. And then finally, and this
is the case with Sentinel as well, the query author
is actively involved in the question refinement
and the protocol development, so that we are actually
sure that Health Canada’s problems are being
addressed, or their questions are being addressed by CNODES studies. So the obvious problems or
challenges with the approach that we’ve undertaken up
until the past year are clear, and the main challenge is timeliness. Data access is a problem
in some provinces, but that’s really sort
of a different question. But there’s this inefficiency in
protocol development: every time we’re
developing a new protocol, it takes time, and a new
analysis plan takes time. If the code is something
we’ve done before, if the definitions are
things we’ve done before, we can gain some efficiency there, but we’re still essentially
designing a new study each time. And Richard and others
made reference to this this morning, that this
is, can be a slow process. We try to use standardized definitions and as much as possible,
we use standardized and tested SAS code, but that’s
in a somewhat informal way, because we haven’t been developing this as a package in the same way that the Sentinel team has done. So this led us, about a year ago, a little under a year ago now, to decide we wanted to try to implement a common data model and some of the similar tools
to what Sentinel uses. We were funded for this
pilot extra project in April of 2017, and we
started with our three sites that have essentially live data access. The goal was that
we should be able to produce query responses in two to three weeks. We implemented more or less
the Sentinel Common Data Model tables, enrollment demographics,
dispensing, et cetera. And we followed very closely,
or are following very closely, the Sentinel Common Data Model structure. And I’ll talk a little bit later about the reasons why we’re doing that. I think it’s important
to have that discussion. And then we’re working through some, we’re planning to work through some demonstration queries with
our advisory committee and with Health Canada
in the coming months. So far, we’ve developed
the table conversions, we have the data in the
Common Data Model format. It’s relatively straightforward, it’s a bit time-consuming,
just because people have to go through the tables and write code to modify the
data structures a bit. We’ve discovered a
number of minor decisions that we’ve had to make along the way that have to do with fields or structures that are different in the
Canadian sites than in the US. And we’ve prioritized the queries, we know which queries
we’re going to attempt when we get to actually
have the data online. Validation of the CNODES Common Data Model is in progress right now. I had hoped that we
would be able to present some of the validation work to you today, we’re a little bit
behind schedule on this for some technical reasons. We’re going to implement
standard Sentinel QC tools, and then reporting back to
Sentinel on our validation steps. We’re also going to then run some queries three different ways. We’re gonna use some simple queries, do it using the Common Data Model, using CNODES standard
tools, and then using the Canadian Institute
for Health Information, which is another data holder in Canada that can do drug utilization queries quite straightforwardly as
a third validation step. Informally, also, we’re working to collaborate with Sentinel. And we’re working to sort of translate and then back-translate
some CNODES studies into the Common Data Model and then back into the standard CNODES format. We’ve done it MarketScan,
Rich presented some of those results at the ICP
meeting this past summer. And we intend to redo that
in a Canadian site as well, just to test the
compatibility of the systems. So I’ll speak just briefly on why we chose to work with the Sentinel
Common Data Model. There are lots of other approaches we could have taken to
standardizing our data. Our reasons, though, have mostly been pragmatic. We have a close relationship
with the Sentinel team. We’ve worked well together on a relatively informal basis so far. They have a demonstrated
process of working with their regulator
and that was appealing to our regulatory colleagues, who knew how the workflow could work, or could observe how the
workflow worked at Sentinel. Our core data tables are
insurance administrative claims data sets, so the
data sets match very nicely with the ones that they worked with, so it was an easy mapping
from one set to the other. And as I said earlier, we’re
taking advantage of Sentinel’s well-established data
quality assurance processes and procedures so that we
know that we’re getting good data out of the CDM. And finally, and this is perhaps
one of the most long-term, but exciting perspectives
on this is the potential for cross-jurisdiction
collaborations between Canada, and Health Canada, and the
US FDA, and potentially, down the line, the EMA, and
some of our European colleagues for multi-national studies of drug safety. Very briefly, there are
some technical reasons why we chose the Sentinel
Common Data Model as well. I’ll really focus on the first one. Given that the query tools
that Sentinel has produced and made available are readily available and match with the
Sentinel Common Data Model, and are of proven value to FDA
and the regulators in the US, it’s natural that we
can use these in Canada. In the future, as I said, we are hoping that this will lead to rapid responses to simple queries from Health Canada. And it should enable
these cross-jurisdictional collaborations that I mentioned. And I’ll just focus on this last bullet, you here in the US
obviously have a much larger population than we do in Canada, but with our public insurance programs, we have the good fortune
that we have a much longer average followup, people
tend to stay in one province and stay therefore with
the same insurance provider for a long time, so we
can look at much longer followup studies on average
than may be possible in the US. Some last concluding thoughts. The Sentinel data structures
and the analytic tools have been easy to implement. As I’ve said, and I’ve repeated this, the possibility for
Sentinel/CNODES FDA/Health Canada collaborations is seen as a real strength. And this is what we’re really hoping, and our Sentinel
colleagues have shown this, that the Common Data Model will advance our response times for some queries. In particular, for things
like utilization queries and event rate queries, so
that we can get rapid responses to quick questions from Health Canada. We don’t think, and I think we heard this this morning as well, that
it will eliminate the need for those complex studies that I showed in the first set of
examples, where we design a study from
scratch, but it will help us reserve those for the studies
that are really needed. And thank you very much. – [Gregory] Great, thanks, Robert. I’ll pass things down to Beth. – No, there we go. All right, I’d like to start
by acknowledging the team, who worked on the study that
I’ll be telling you about, including Jason Simeone and
Samuel Huse from Evidera, and Kwame Appenteng and
Milbhor D’Silva from Astellas. Now, back in the day, when
the Sentinel Initiative was still mini, and they
started releasing things like protocols and programs
out into the public domain, some researchers realized that
they could use these tools, not just to see what was going
on in the Sentinel program, but also to do their own studies. So all you had to was take your claims, or possibly even the structured
field from an EMR database, get them into the right
format, and then run them through your very own
of a Sentinel analysis. So that’s what we did. There was a Mini-Sentinel
project looking at safety of mirabegron, a
treatment for overactive bladder. And the outcomes in that
study were MI and stroke. And we set out to replicate
that study exactly, but using different data
sources, specifically, the MarketScan and PharMetrics
large claims databases. Now our approach, unlike
some of the things that Rich Platt referred to this morning, was really quite simple and streamlined. You’ll notice that we didn’t
start by writing a protocol, because we were able to simply use the published Mini-Sentinel protocol. So our first step was
just to transform the data into the Common Data Model. Then we created lists of the drug codes that were needed, built
input configuration files, and ran them through
the programming modules that did all of the cohort
selection and analysis for us. Then we did some extra
descriptive tables on the side. We did a little bit of
reformatting of the results and wrote it all up into a report. It was a very streamlined process really, compared to a typical project. Now I won’t walk you through
all the detail on this slide, you don’t really need to worry about the inclusion and exclusion criteria, but if you can look at
the bottom two boxes, there were actually
eight different analyses that we did in this. So there were two databases,
there were two outcomes with slightly different patient
populations going into each. And then there was a primary
and a secondary analysis based on whether the
patient had previously used treatments for
overactive bladder or not. And within each of
these separate analyses, there were tens of thousands of patients included in the analysis. So you can see that this was by no means a small-scale study. Now the analysis that
was done by the module did propensity score
modeling and matching. It output all of the descriptive results on the events, the
denominators, and it ran Cox models comparing the risk
of each event in mirabegron to oxybutynin, which is
another OAB treatment. Now the module didn’t
automatically produce the baseline pre and post
matching descriptive tables that we wanted to see, so
there we did just a tidbit of our own custom programing
to create those tables. Let me walk you through just some high points of the results. In essence, the MarketScan
matching balance worked very well, but in PharMetrics, the matching didn’t work at all. The model didn’t converge,
and the way that the program was set up was that if your
model doesn’t converge, you don’t get matched
results, end of story. And there was nothing
you could do about it. Now this was a version of the
program from a few years ago. And of course, it’s
been updated since then. I understand that this
is no longer an issue the way it was back then, but
you can imagine at the time, this was something of a
stumbling block for us. But nevertheless, we went ahead and we ran our outcome models. And we found no evidence of an increase in MI with mirabegron. And similarly, no evidence
of an increased risk of stroke with mirabegron
compared to oxybutynin. And these findings lined up very nicely with the findings from
the Mini-Sentinel study. So we did succeed in
replicating their results. Now let’s talk a little
about what was good about this process and
what was not so good. There’s one huge advantage,
and that’s the speed. It really was very quick. You can trim months off the typical study timeline doing something like this. There’s also the reassurance of knowing this protocol has been
very well thought through, it’s been scrutinized by so many people. It’s really unlikely that there’s some fatal design flaw in it
that no one’s noticed. Similarly, for the programming, it’s all been thoroughly validated; it’s not likely there’s a bug
hiding in there somewhere. Also, there are some ways
that you can customize things. I’ll get back to that notion in a minute. First, let’s talk about the
limitations a little bit. There’s one major limitation to this; the others I’d consider minor. But the adaptability of the programming is the one huge limitation
that we do have to recognize before going into using these tools. Any time you have something
that’s this automated and fast, you have to sacrifice some adaptability. So even though they’ve been continually expanding on what these tools can do, there’s always gonna be a limit to that. Now other more minor
concerns are the fact that, well, for instance, you
can’t put together a list of NDC codes in a protocol
and call it final, because they just come out
too often with new NDC codes. So some things like that,
you just have to go in and look up for yourself on the fly, you can’t just depend on the
published protocol for that. For some of the programming, we had a hard time telling exactly what was going on. And the programs took
a little longer to run than we thought they should have taken. This was not analyst time, it was just computer processing time. But still, if we’d been
doing our own custom programming, we probably could have
gone in and adjusted things in the programing to make
them run more efficiently for our particular situation. And finally, you might
need to do a little bit of your own programing,
just to get the output that you want to see in the end. But taken altogether, I would
say if your research question aligns neatly with an
existing Sentinel protocol, the benefits of using these tools far outweighs the limitations. Now what we did was a simple replication, just running the exact same
analysis in different data sets. But there are other ways
that you could branch out some from that and not just
do the simple replication. You do need to build
these configuration files, where you include, among
other things, your code lists. So maybe you want to try a
slightly different definition of your outcome variable
and run that through the programs as a sensitivity analysis. Again, that would be very quick and easy to do using these tools. Similarly, you could define subgroups based on some baseline
characteristics of patients. And again, see if your results
hold up in these subgroups. You can also veer far more
from the original protocol if you choose, and look at
completely different outcomes, or change the drugs
that you’re looking at. Now if you go that far from the protocol, you have to be careful,
because then, in essence, you have to come up
with your own protocol. You can base your protocol off
of the existing Sentinel one, but you do need to think
carefully about whether the study design and the analytic methods still make sense for your
particular research question. But still, if you can piggyback off of the Sentinel protocol and
use the programing modules, you can still save yourself a lot of time. And finally, if use of
these tools really picks up, it would be great if we
can find a way to share our findings with one
another, so we can see what one another is doing,
as this outside-the-Sentinel-System use of these tools grows. I, for one, hope that use
of these tools does expand, because I think they’re a great resource. And they can be useful well
beyond their original purpose. Thank you.
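The propensity score matching mentioned in this talk is handled inside the Sentinel modules themselves. Purely as an illustration of the general technique, and not of the modules’ actual algorithm, a greedy 1:1 nearest-neighbor match on precomputed propensity scores might look like this:

```python
# Generic illustration of greedy 1:1 nearest-neighbor matching on
# precomputed propensity scores. This is NOT the Sentinel modules'
# actual matching algorithm; it only shows the general idea.

def greedy_match(treated, comparators, caliper=0.05):
    """Pair each treated score with the closest unused comparator score."""
    available = sorted(comparators)
    pairs = []
    for t in sorted(treated):
        if not available:
            break
        best = min(available, key=lambda c: abs(c - t))
        if abs(best - t) <= caliper:
            pairs.append((t, best))
            available.remove(best)  # match without replacement
    return pairs

treated_scores = [0.30, 0.52, 0.90]
comparator_scores = [0.31, 0.50, 0.55, 0.10]
print(greedy_match(treated_scores, comparator_scores))
```

The caliper keeps poor matches out of the analysis: when no comparator falls within it, the treated episode simply goes unmatched, which is one reason matched cohorts end up smaller than the starting population.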
– Great, great presentation. Thank you very much, so, Claudia. – Okay, thank you. So I’m going to tell you about
some work that Lilly has been conducting with the IMEDS
data and the Sentinel team. Lilly launched a query to provide context about the safety profile of patients with rheumatoid arthritis
who are receiving treatment with disease-modifying
antirheumatic drugs, or DMARDs. And in particular, we were
interested in estimating the incidence of venous thromboembolism among these patients with RA. We were also interested in other queries looking at the individual
events of pulmonary embolism, and deep vein thrombosis. Results from the query were included as part of Lilly’s
resubmission for baricitinib. So we selected the IMEDS
data for two reasons. And I’m gonna give you a
little bit of background about IMEDS in just a second. But the first was size. Because we were looking at a rare outcome, we needed a large data source. And the second reason was the availability of validated tools for
conducting the query. So IMEDS is a public-private partnership that is managed by the
Reagan-Udall Foundation, and I know June Wasser
is in the audience here, to permit non-FDA researchers to access the Sentinel data and the Sentinel tools. And in this figure, you
can see the similarities and the differences
between when FDA accesses the surveillance database, essentially, the Sentinel distributed database, and when non-FDA researchers like Lilly access the IMEDS distributed database. So IMEDS is a subset of
the Sentinel database, and we access it through
the Reagan-Udall Foundation. From that point on, the process
is the same, essentially. Whether it’s FDA or a non-FDA researcher. And in particular, the
same tools are used, and you have access to
the same team of experts. So the members of the query
team were representatives of each of the groups that
I had mentioned just before. And in addition, early on in
the development of our query, we had the opportunity
to discuss the query, and the motivation for the
query with the data partners. This also allowed the
data partners to ask us any questions that they
had, and allowed us to confirm our interest
in sharing the results of the query with the broader community of RA researchers and clinicians. This was a win for both
of us in confirming that. So our study population
was defined as patients who had at least two RA diagnostic codes within the year before baseline. Patients were required to
be new users of DMARDs, and they were followed
until they had an event, discontinued treatment
with the medication that they were on at index
time, or left the database. And we began with 75
million people represented in the IMEDS data subset of Sentinel, that was contributed
by five data partners, and in the largest treatment group, we had about 69,000 treatment episodes. Essentially, 69,000 patients. So I’m going to share
the results with you, but I want to tell you a
little bit about the process because it was somewhat different. So typically, we would
provide a high-level overview of the query that we were interested in. We would contract, and then we would work to refine the query, provide
details, and execute it. And in this case, we
needed to actually get all of the details, or
most of the details, defined ahead of time,
so that we could decide the scope of the work. And when I say scope, I’m
talking about the modules. We had a descriptive query, since we were just trying
to define an absolute measure of the incidence rate. So we used a Level 1 module. And this is important because the module determines the number of
scenarios that you can use. The scenarios are
essentially the parameters of the query that are used
for each treatment group. So an example would be patients with RA who are new users of biologic DMARDs and are being followed for VTE. So I will say that this step was probably the biggest learning curve,
especially for people who had not worked with
the Sentinel team before. And we also had to learn about
the limitations of the tool. In particular, the ability
to use combinations. And I’ll give an example
in our case definition. So once we arrived at a final
draft of the specifications, Harvard Pilgrim executed
a feasibility assessment in the local data, just to
look at the numbers of patients and the amount of person
time, so we could evaluate if there were any scenarios
that we wanted to eliminate, or that wouldn’t be practical. We chose to be blinded for
this, so that the results of the incidence rate wouldn’t affect what we retained or removed. And really, at this point, for us, we handed it off when we had
the final specifications, back to Harvard Pilgrim,
and a few weeks later, we had our results. Now that makes it sound like very easy, but I will throw in an additional wrinkle, which was that somewhere around the middle of the second step, we needed
to accelerate our timeline. And so between handing off
this final specifications and getting the results, and I thank you, June, out there, and
all of the team members. We were sending multiple
requests to clarify what the timeline was and
asking for status updates. There was constant communication
and abundant discussion. And it was as a result
of the strong partnership of the team that we were
successfully able to get our results and incorporate
them into our documents. So this slide is simply
intended to illustrate that there are three scenarios here. You don’t need to read
it, it’s just intended to reveal that there
is a substantial amount of detail in every scenario. Okay, so for our query,
we defined VTE, DVT, and PE, based on diagnostic codes. But for VTE, we used a
compound definition as well. And this is what I mean
about the limitations of the Sentinel tools for
using multiple combinations. So this required use of a different tool, the combo tool, and I’m
imagining a very large machine, where you feed things in at one end and something magic
comes out the other end. But we were limited in the number of combinations that we could use. So we chose this definition,
namely that patients who were diagnosed in the emergency rooms, or in hospital settings,
would be considered to have an event based
on diagnostic codes. But for any diagnoses that
occurred in outpatient settings, we also required them to have
evidence of anticoagulant dispensing within 31 days of diagnosis. This was done in an effort
to balance the positive predictive value with maximizing the ability to detect events. In other words, maximizing sensitivity. Okay, so here, this table
is showing the results for the two largest treatment groups, which provide the most robust estimates of the incidence rates of VTE. These are patients treated with conventional DMARDs and biologic DMARDs. And in particular, you
will see that patients who were new users of
cDMARDs had an incidence rate of 1.49 per 100 person years. And patients treated with
biologics had an incidence rate of 0.98 events per 100 person years. Now these incidence rates
are not adjusted for age, which is a strong risk factor for VTE. Or for other risk
factors, which may differ in distribution between
these treatment groups. So I will show you the next slide. Oh, so, right, so here, in this slide, I’m showing you the same
results as previously. In the grayed rows are the incidence rates
and the bDMARD new users, just as previously. But in addition now, you’re also seeing the age-specific incidence rates. And there are two things to notice. One is that age is a strong risk factor for VTE, as I mentioned. This isn’t surprising, this is also true outside of the rheumatoid
arthritis population. And you see that the incidence rates range within the new bDMARD users
from 0.69 to over two events per hundred person years, as you move from the 18 to 49 year old patients
to those 65 and older. The same effect is seen for age
within the new cDMARD users. The second thing to notice
is that when you control for the effect of age, you
see that the difference between the new cDMARD users
and the new bDMARD users is attenuated to a large degree. For example, if you look
at new bDMARD users, in the 50 to 59 year old patients, the incidence rate is
0.65 per 100 person years. And looking at the same age
group in the new cDMARD users, you have the same incidence rate. Now this controls for the effect of age, but there’s certainly the
possibility that there are other risk factors whose
distribution also differs between these groups,
so really, you shouldn’t actually compare directly between groups. But it does show that
once you control for age, you reduce some of the difference
between the two groups. Okay, wow, this took much longer when I ran through it previously. And so we had some thoughts
about our own query, and about IMEDS in
general, for future directions. We are planning, in the Lilly query, to have an external
presentation and a publication. And we are still working
with the IMEDS team to validate the case definition. We are also conducting
internal investigations using custom programming to
look at sensitivity analyses and replicate the Sentinel query. And for IMEDS, we would
recommend expanding access and public awareness of
the IMEDS access program. In particular, as I have
spoken to rheumatologists who are treating these patients, there’s interest in the
results that we’ve seen, but they’re not aware of this data source. And then because we were
looking at a rare outcome, I would suggest that
expanding to other Sentinel data partners, and potentially
other sources of data, even maybe primary data
collection, would be really useful. I know that David Martin
was discussing this morning that he’s looking at ways to
collect primary data. And I would hope that
that would be incorporated at some point into the IMEDS program. And then we would also recommend expanding to other non-FDA stakeholders. Not only for access to the data, but also for considering adding their data so that it could be evaluated as a whole. Okay, and then, I wanted to thank all of the partners who
were involved in this query. In particular, the strong
partnership of Judy Maro, Jane Huang, and David Martin,
who really contributed to the success of the query. And June Wasser for coordinating
our query, and for being very patient with our
requests about the status. Thank you.
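The compound case definition and the per-100-person-years arithmetic described in this talk can be sketched in code. This is an illustrative outline only, not the actual Sentinel/IMEDS modules: the class, field names, and numbers below are hypothetical stand-ins.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Diagnosis:
    """A hypothetical diagnosis record (not a real Sentinel structure)."""
    setting: str  # "inpatient", "emergency", or "outpatient"
    days_to_anticoagulant: Optional[int] = None  # days from diagnosis to dispensing

def is_vte_event(dx: Diagnosis) -> bool:
    # As described in the talk: ER/inpatient diagnoses count on
    # diagnostic codes alone; outpatient diagnoses also require
    # evidence of anticoagulant dispensing within 31 days.
    if dx.setting in ("inpatient", "emergency"):
        return True
    return dx.days_to_anticoagulant is not None and dx.days_to_anticoagulant <= 31

def incidence_rate_per_100_py(events: int, person_years: float) -> float:
    """Crude incidence rate expressed per 100 person-years."""
    return 100.0 * events / person_years

# Hypothetical numbers chosen only to show the arithmetic:
# 150 events over 10,000 person-years is 1.5 per 100 person-years.
rate = incidence_rate_per_100_py(150, 10_000)
```

As the talk notes, such crude rates are not adjusted for age or other risk factors; stratifying, for example computing the rate within each age band separately, is the simplest way to see how much of a between-group difference age explains.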
– Okay, great. Thanks to all of the presenters. It’s been a wide-ranging set of examples of how to further the
Sentinel infrastructure, how to use it for other
things by different users. Great examples, I have a few questions of followup as folks in the audience think about questions that you might have. Maybe starting with the last, Claudia, and your experience
with the IMEDS program. I think you opened the
presentation saying that the query, or the initial safety questions that you went to IMEDS with, were related to some regulatory
activity at the company. So the question is, given your experience in working with the IMEDS group, where do you see the best opportunities for that kind of an approach
to support a company? Is it responding to a safety question that the FDA might come
to the sponsor with? Or is it embedded as part of
a post-market requirement? Or what are some of the, I
guess, the best opportunities for that program, for
a company like yours? – Well, I think that, really, the size and the quality are the two
reasons that would drive, and certainly drove us,
to look at the IMEDS data. So for any rare outcome, you’re looking for a database that is
as large as possible. And had we been interested in
one of the individual outcomes more than the aggregate VTE, I
think that it would have been challenging, even with a
database as large as Sentinel’s. And again, having the validated programs that are sort of hands-off,
you define the parameters ahead of time, but anybody
starting with your parameters is going to get exactly the same results. It is very reassuring, both to industry, and I’m sure, to regulators. So that is another strength of this data. – Great, thank you. One quick question to Robert,
with your CNODES example. It’s a two-part question. The first part’s really fast. How long will it take us to get to a point where it doesn’t take two years? So you’re talking about bringing
in the Common Data Model to make things more efficiently,
how long will it take to get to a point where a query that’s run on the Sentinel System
here could basically be simultaneously run on the CNODES program? – So I guess our ambitious target is sort of April, May of this
year, but I think this summer is a realistic target to be able
to be at an operational stage. – So my followup related to
that is, is it that simple? The data that are used in
Sentinel, and as you indicated, for CNODES, are largely claims data. Those are there because
insured populations have medical encounters,
and one of the challenges with claims is that sometimes
the benefit structure of the health plan might have influence into the patterns of care that you see. Should we worry about, or should
we spend some time looking at differences in just
the utilization patterns between what we see in
the Sentinel experiences, versus what we see in Canada? – So the short answer to that
is yes, I think we should be looking at those sort of
patterns in our experience within Canada, because there’s variation. The systems are similar across Canada, but the specific
coverages, the formularies can differ from province to province. So we’ve had examples where
utilization patterns for, for example, one of the ones
we’ve talked about before, proton pump inhibitors, are very different in different provinces because in at least one or two spots, they’re only covered as a second-line therapy
after H2 blockers. So we’ve experienced
that, where you’ll see a drug that is very common in one place, and one pattern of associations
with outcomes in one place looks completely different
in the next province over. And the people are the same,
but it’s the formulary, and the programs that are very different. – [Gregory] Okay, Dan? – [Dan] Hi, I’m Dan Mines from Merck. – Do we have a, can we get
the microphone on in the back? – Hello?
– Okay. – Hi, I’m Dan Mines from Merck. Thanks very much to the panel for very interesting presentations. I have a question for Chris Granger regarding, basically,
I was really impressed with the ability to do tests
of an educational intervention using randomized trial
within the Sentinel network. I’m wondering if there are any situations where you could actually envision a trial of studying the effects of
drugs using this framework? – Yeah, I think the answer is potentially, but of course, in
terms of a pragmatic approach, it would probably need
to be commonly used, evidence-based treatments
that one might compare. And then one would clearly
need informed consent at that time, which would
then require selection. But in fact, the ADAPTABLE
trial done through PCORI, done through PCORnet, is leveraging some of these same issues, in
terms of at least supporting a randomized clinical trial
of two doses of aspirin. Although, that’s through PCORnet. But I think the answer
is yes, potentially, but the lower-hanging fruit is these lower-risk interventions. – [Gregory] Where you don’t need to get the informed consent? – Yeah, I mean, I think Rich
could comment on that as well. I think it’s been a remarkably
important part of this highly pragmatic trial, where
we can randomize 80,000 patients at low cost and include
all eligible patients. – Great, so I have a question,
maybe I’ll turn to Allison. I think you started in the beginning, or somewhere in your
presentation you said, when we get UDIs in claims, there would be a big opportunity for
Sentinel to help NEST develop better evidence
for medical devices. Short of that happening any time soon, are there opportunities where
the Sentinel infrastructure can partner with NEST, or at
least partner with registries to help develop better
evidence for devices? Do you need to have UDIs
in the claims to do that? – Right, yeah, because
the adoption of UDI, it’s a big problem to tackle. And it’s taking a lot of work. And there is a lot of great
work being done in CDRH on this front, but
there are several steps. We’re starting with the GUDID database, which contains devices with their UDIs. And then there’s a lot
of work that will need to be done to get these
UDIs into the device labels. And then the bigger hurdle is getting them into the actual EHRs and claims data. So other strategies we
can use, as you mentioned, partnering with specialized
device registries, or coordinated registry networks, which provide an opportunity
to identify patients who have received treatment
with a medical device. And then data
linkages with sources such as claims data, such as Sentinel, and death index data
provide an opportunity to really look at longer-term followup data for these patients, absolutely.
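The linkage strategy outlined here (identify device-exposed patients in a registry, then join them to claims for longer-term follow-up) amounts to a join on a shared patient identifier. Below is a minimal sketch with entirely hypothetical field names and records; real linkages typically use hashed tokens or probabilistic matching rather than a plain shared ID.

```python
# Hypothetical records: registry rows identify patients who received
# a device; claims rows supply longer-term follow-up encounters.
registry = [
    {"patient_id": "A1", "device": "valve-X", "implant_year": 2016},
    {"patient_id": "B2", "device": "valve-X", "implant_year": 2017},
]
claims = [
    {"patient_id": "A1", "year": 2017, "dx": "heart failure"},
    {"patient_id": "C3", "year": 2017, "dx": "fracture"},
]

def link(registry_rows, claims_rows):
    """Deterministic join: for each registry-identified device patient,
    collect any claims records sharing the same patient identifier."""
    claims_by_id = {}
    for c in claims_rows:
        claims_by_id.setdefault(c["patient_id"], []).append(c)
    return {
        r["patient_id"]: {"registry": r, "claims": claims_by_id.get(r["patient_id"], [])}
        for r in registry_rows
    }

linked = link(registry, claims)
```

Note that the result keeps every registry patient, even those with no claims match, which is what you want when the question is long-term follow-up of the device-exposed cohort.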
– And also, I would point out a great example is the TVT registry, which Dr. Pappas mentioned this morning, which is already a great
precedent for this model. – Great, I’ll turn to Beth. You spent, in your project,
a lot of time looking at the ability to apply the programs,
the modular programs, or some of the Sentinel tools, to other data sources. And you listed some
limitations and some benefits, what would you say would be your biggest, your recommendation
for where to prioritize improving the tools, at least
the ones that you’ve used? – That’s a good question. I think one of the big
hurdles that we encountered was just that transforming
the data into the common data model takes a fair chunk of time. So if we’re able to have
things more easily available, code out there to transform
the data, potentially, that would have saved us
quite a bit of time on it. Other than that, it
really just is this issue of adaptability, so the
more checks that can be worked in there to flag that, okay, something went wrong, or
something’s not working optimally, and give you options for
making changes at that point. That really is what I would like to see as the top priority. – Okay, any other questions
from the audience? Okay, so I’ll keep going. Well, I’ve got one
more, and I’ll turn to– – This guy right here.
– I’ll turn to Chris Granger from Duke. Okay, so your recommendation
was maybe to start with, you know, areas where
you could sort of do more randomization leveraging
the Sentinel infrastructure. Low-risk, where you don’t necessarily, you don’t have to go through the challenging informed consent process. Related to that, I suspect
that the first time going through this, you
may have identified a lot of opportunities to improve
efficiency the next time around. So based on what you learned,
or are learning, in this study, if we were to do this
more often in Sentinel, what are some of the
big areas where you can improve efficiency from your experience? – Yeah, so, and in fact, in
the Collaboratory Grand Rounds, we had some discussion about
this, about lessons learned. And they include some
things which are kind of common sense. That includes engagement
of the data partners early on to refine the
materials, the protocol, the regulatory issues,
basically to have the team kind of fully integrated
from the beginning. I think things have
actually gone pretty fast, in terms of the trial. But it was somewhat behind what
we’d originally envisioned, in part related to those challenges. Some of the key things were like getting Medicare Advantage on board. Rich was able to get
a letter from Medicare saying it was okay to go ahead and include Medicare Advantage patients in the trial, which was a key factor. There were a variety
of examples like that, that were important.
– Great, Rich? – [Rich] Hi, just two things. One is Sentinel licenses
the MarketScan data, and has transformation code. So if you are a licensed
user of MarketScan, we’re working with IBM Truven to be able to release that code to licensed users, so– – Thank you, Rich. – Yes, thank you very much. – [Rich] All right,
we’re in the final stages of making that available. For good
and sufficient reasons, we can’t just post that the
way we will post the code for transforming CMS data
into the Common Data Model. And then the other point
with regard to Dan’s question about studying therapeutic
agents, the example Ken Sands gave this morning
I think gives some sense of another way you might do comparisons of therapeutic agents
through formulary management. I’m not saying it’s a straight shot, but if you’re talking about
preparations that have the same indications, you
could randomize formularies. And that, it’s a different study design, it would teach you different
things, but it’s a way forward. – Great, thank you. Last question.
– Dr. Granger, has Duke, with all its
expertise over the years, from when you built your own
stuff to now with Epic, built an app where
Duke grandparents with AFib, horrible AFib, can actually say, yes, they’re complying each
day, taking their med, but yes, they had an episode,
they were just in the ER, because they accidentally
took their husband’s pills, and what the patients,
why don’t we do that? Why don’t we let people
opt in with an app? You know, 85 year old people and 80 year old people
actually do use iPhones. And they can say, you know, I mean, AFib is just a horrible illness. – Yeah, great question. And the answer is yes,
but not quickly enough. And not extensively enough. So we’ve done a randomized clinical trial, for example, to give healthcare providers best practice alerts
when a patient comes in with a diagnosis of atrial fibrillation, and is not on an anticoagulant. And we’ve had educational
programs for patients, but we have not had as comprehensive
a program as we should, and we’re actually in the
process of doing that. We’re doing a, we’re involved
with the Premier Group on a cluster randomized
trial at a hospital level of trying to improve anticoagulation use at the time of discharge and
a variety of other programs. It’s a key opportunity. I think at the same time that
we do this type of program, we also need to be working
with health systems in leveraging electronic
health record opportunities to better measure and give feedback on
care for this type of a problem. – [Audience Member] Are
you using the Epic database and data warehouse to actually right now proactively in the Durham
Chapel Hill research triangle, RDU, RTP area, are you
gonna go actively manage and grab anybody on an AFib med? – Not exactly, but we are, I think that kind of idea
is what we should be doing. And we’re exploring those things, yeah. – [Audience Member]
I’d be honored to help. – Okay, thanks.
– I’m real active with Dean Broome and her
nursing advisory council. – Great, thanks for volunteering to help. Okay, so that takes us
to the end of the panel. Thank you to all of the
panelists and presenters for great examples, very good discussion. We’re gonna go ahead and take a break now, we’re gonna start back in 15 minutes at about 2:45 for the final session. Thank you.
(audience clapping) – If you all could head back in
and head towards your seats. And if our other
panelists could head on up to the front, I’d appreciate that, too. Thank you very much. We’re about to start this final session for the Sentinel Initiative
event, thank you. This group really likes
networking together. Everyone, I’d like to welcome you back to today’s final session. This is one that intends to
build on many of the themes that we’ve heard about this morning, starting with Commissioner
Gottlieb’s keynote, as well as some of the
ideas that different leaders in different parts of FDA
brought up about the future of the Sentinel System, as
well as many of the ideas that have come up in the
last several sessions, focusing on some current
activities, new applications, and potential paths
forward for many of those. So this is about a look into the future of the Sentinel system. What might the next
decade hold for Sentinel, and what are the important steps in the near term to help us get there? So we’re gonna take a
look at some of these possibilities from a
range of perspectives. We’re going to have
some time for discussion with the group, in addition
to their presentations. And as always, looking forward
to your comments as well. So we’re gonna start out
this panel with Bob Ball, who’s the Deputy Director of the Office of Surveillance and Epidemiology, at the Center for Drug
Evaluation and Research. Then Rich Forshee, the
Associate Director for Research at the Office of Biostatistics
and Epidemiology, at the Center for Biologics
Evaluation and Research. Alan Williams, Associate
Director for Regulatory Affairs in the Office of Biostatistics
and Epidemiology, at CBER, the Center for Biologics. Darren Toh, Associate
Professor in the Department of Population Medicine at
Harvard Medical School, and Harvard Pilgrim Health Care Institute. And the Director of Applied Surveillance at the Sentinel Operations Center. And Josh Gagne, Associate
Professor of Medicine at Brigham and Women’s Hospital
and Harvard Medical School and an Associate Professor in
the Department of Epidemiology at the Harvard T.H. Chan
School of Public Health. Next, Joanne Waldstreicher, who is Chief Medical Officer
at Johnson & Johnson. And Marianthi Markatou, who’s
Professor of Biostatistics and Computer Science at
the University of Buffalo. We’re very pleased to
have all of you with us. Each of them is gonna start
with some opening comments related to this look into the future, and then some time for discussion. So Bob, please go ahead. And everybody can just stay
seated for their remarks. – Thanks, Mark, so many of the themes that I was gonna discuss
have already been brought up, so I’m just gonna pull together
some of the highlights. But there is one topic that
we haven’t really touched on, which I’d also like to mention. So first of all, just a reminder that FDA is committed to improving
the Sentinel System through the PDUFA-VI agreement. And that really, in a very broad way, outlines what we plan to do
for the next five or so years. This panel is engaged with the discussions out to 10 years, so that
gives us the opportunity for more creative thinking. We heard earlier in Michael
Nguyen’s presentation about how we’re using the
ARIA sufficiency analyses to help us identify the strategic areas where we need to do our
further development. And you can see from Michael’s
data that the biggest gap in ARIA sufficiency is
in outcome validation. So that’s where we’re going to be focusing a lot of our effort, and
I think we’ll see that being done in an incremental process. First, starting with the
data that we currently have, in the current systems,
working very practically on improving traditional
chart review efficiencies. And then moving to other technologies, such as machine learning and
natural language processing, using the data that we currently have, but really looking
forward to how we really would like a system with
computable phenotypes to really operate within
the context of Sentinel. The other big theme is signal detection. And at FDA, we always go back
to our enabling legislation. And in this case, it’s FDAAA 2007. And Michael talked about the
provisions that drive ARIA. But there is another mention
in there about a goal of signal detection using
population-based data sources. And it’s not quite as explicit as the ARIA requirement tied to the
post-marketing study requirement, but it is something
that we take seriously. I think we have done some work in this area, within Sentinel. And many of the questions, the technical questions, still remain. What methods, what data? There’s the philosophical
question of data reuse. But there are more pragmatic
questions as well, such as, where does
population-level data fit in the existing paradigm, in which FDA and the pharmaceutical
industry use all sources for signal detection, with refinement and validation in
population-level data sources, which is really the basis
for how we use Sentinel now. So the other point that
we haven’t discussed today is the role of Sentinel
in pharmacoepidemiological methods development more broadly. I think Rich Platt said
that what’s really happened in Sentinel over its original 10 years is we’ve incorporated standard methods in automated tools, and
put in place processes for making that work very efficiently.
should we be doing, though, to address the more fundamental problems of epidemiology and bias control? And what new approaches might we consider? And are there clearly
agreed upon approaches now that we can implement in
more specific processes that would continue to advance our ability to address safety questions? And lastly, the biggest
challenge for Sentinel is that all these issues
have to be addressed in the context of a distributed network with different types of
data and data partners. And we also have to meet the
100 million life requirement that’s specified in
FDAAA, not an easy task. And all the while, maintaining patient privacy and data security. I’ll stop there.
– Great, not easy task, but thanks for teeing up
these issues for the future. And next is Rich Forshee. – Hey, good afternoon, everyone. It’s been a great meeting so far today. I’m really happy to see
everyone who’s here. You’ve heard a number of presentations from other people at CBER
this morning that have talked a little bit about how we
have some special needs because of the kinds of
products that we regulate. So I’m gonna take just a couple of minutes to explain a little bit more about what our regulatory responsibilities are and how they affect the
data needs that we have. So at CBER, there are
three major product areas that we’re responsible for. The first is vaccines, the second is blood and blood products, and the third is tissues
and advanced therapies. There’s some unique things about each of those product classes that affect the kind of data and
analysis that we need to do. Starting first with vaccines, one thing that’s very
different about vaccines is that vaccines are given
to healthy individuals in order to reduce the risk of some infectious disease in the future. This is very different than using a drug to treat someone who already has a disease that’s causing them some problems. Also, many of the vaccines
are given to children, for whom we have a particularly
high safety threshold. And we also have a special
issue with influenza vaccines, because these are given annually, and they’re given in a
very compressed time frame. In a few months, we have about 100 million or more vaccines that are administered. So as a consequence, we
have a very low tolerance for any risk for these vaccines. Certainly, compared to cancer therapies or drugs to treat other
very serious diseases that people already have. And we also have a need
for very timely data. If there’s a problem
with a seasonal vaccine, whether it’s an issue of efficacy, or any safety issue that we may find, we need to know that as soon as possible. With blood and blood products, I just want to mention that, first of all, blood donation is something
that’s very common. And you can get many
different life-saving products out of any given blood donation. Blood transfusions are really very safe, but there are still
remaining safety concerns. One of the concerns is
infectious diseases. Certainly, in the early
days of the HIV epidemic, that was a critical concern. And also with emerging
infectious diseases, we need to be able to
respond quickly in that area. CBER also wants
to ensure the safety of blood donors. Making sure that they’re
healthy enough to donate, that they aren’t donating
too frequently, for example. And finally, in the blood
and blood products area, we need to balance all
of these safety concerns with maintaining an
adequate supply of blood to meet all of the medical
needs that we might have. Just to mention a few things about tissues and advanced therapies,
we regulate a wide range of human cell and tissue products. This includes things such
as bone, skin, corneas, ligaments, tendons, oocytes, and semen. A very, very wide range
of tissue products. And we also are responsible
for certain advanced therapies, to give you just a couple of examples. Last year we approved the first two chimeric antigen receptor T cell products. One for treating leukemia, and one for treating
large B cell lymphomas. We also approved last
year the first directly administered gene therapy product. And this was treating a
mutation in a specific gene that can result in blindness. So I mention all of this just to give you an idea of why some of
our needs may be different than needs in other centers. So as a consequence,
we’re always looking for new sources of data
that help to better meet the specific needs that
we have within CBER. My colleague, Dr. Williams,
is going to be talking in his presentation specifically
about our new initiative, the Biologics Effectiveness
and Safety program. And how we’re trying
to use that to address some needs that are specific to CBER. I’ll go ahead and stop there. Thank you very much. – Great, thanks very much. And next is Alan, and I
think you need the clicker. – Thank you, so I’m gonna build on some of the introductory
remarks made before lunch, by Azadeh Shoaibi,
regarding the BEST effort. Particularly the innovative
methods work that’s being done. And I’m gonna approach this from a specific use case related to efficacy and safety for
blood transfusion recipients, otherwise known as hemovigilance. Hemovigilance in the US, despite some very dedicated efforts through the years, has been very difficult to obtain in a reliable fashion, for several reasons. One is that exposure data are difficult to obtain. I know there has been work done with claims coding related to blood transfusion, but for various reasons, which I won't go into, estimates in the literature are from 40 to 70% sensitivity of exposure codes related to blood transfusion. Specificity is good, but sensitivity at that level is a real problem. Codes for exposures remain something that either needs to be improved, or we need to find a better way to go about it. And then outcome data,
similarly, are difficult simply because these patients
who have received transfusions have many other medical things going on, and it's sometimes tough to tease out symptomatology and lab findings that are specifically related to a transfusion exposure. Why do this? Not only for surveillance
observational purposes, but for quality purposes,
the blood community would very much like to
benchmark between institutions and be able to evaluate their own work. And also several of these adverse events are amenable to
intervention and, of course, intervention evaluation is very important. The numbers are shown here,
basically there are close to 18 million transfusions per year, it’s kind of a shrinking industry, because of better blood
utilization programs. But to bring it home, probably
40 to 70% of the folks in this room will receive
a blood transfusion at some time in their lives. So it’s an important issue to study. Manufacturing steps are many,
and some are listed here. Exposure codes, when they are available, typically only list the blood product, and some very basic manufacturing
steps, nothing else. Whereas some of the ISBT
codes that Azadeh mentioned, and I’ll go into in a little bit, have much more granularity to them, because they’re actually labeling
codes for the blood unit. Most notably, at the bottom of the slide, I'm not referring right now to plasma derivatives, which are products made from blood plasma and regulated pretty much as pharmaceutical products. So hemovigilance in the US,
there are about 14 blood component recipient
adverse events recognized. They comprise hemolytic, non-hemolytic, infectious, and unclassified categories. Severe adverse outcomes are uncommon, but on rare occasions they can be fatal. And when you multiply by 18 million, the numbers start to look important. The ISBT working party on hemovigilance has put forward transfusion reaction case definitions, which are published. And these have formed the basis for many of the hemovigilance efforts
that have been undertaken. Some of the pressures that make it difficult to conduct hemovigilance in the US are, number one, probably, that we don't have a national health system. And many of the things that we're talking about today stem from the fact that we don't have commonality and interoperability between our data systems. So that remains a major problem. But the blood community is a small industry with limited resources, and transfusion recipients often have serious underlying conditions. Each transfusion component,
unlike a pharmaceutical, is an individual lot,
they’re not made in batch. So each product has its own
biologic characteristics. And as mentioned, the
availability and validity of exposure and outcome
data has been challenging. So the Sentinel BEST program
right now has been working, as Azadeh mentioned, to use
ISBT-128 blood component labeling codes as exposure variables. These are machine readable,
as required by the FDA. And they do provide accurate and detailed descriptions of transfused components. Importantly, these codes, at
least in our experience so far, have not been located in the EHR. You actually have to get to the blood bank and download their database
to find these codes. And as mentioned by Azadeh,
they’ve been incorporated already into the OMOP Common Data Model. The Common Data Model itself, I think, has been really well
described, the OMOP model. I think an important thing to mention here is using something like data mining, you often need an
iterative process to look at the variables that you describe, see how they play out, and modify them. And really rapid analysis is important to that process. And OMOP seems to support this very well. Adverse event data are difficult to come by. And the effort within BEST
is to use data mining, specifically NLP at this point, with a combination of text data, diagnostic codes, and condition codes, to define high-quality outcomes from the charts. Why is this important? I'll demonstrate by showing a couple of slides from a validity study that was done by the AABB,
which is our major US blood banking professional organization. CDC, the Centers for
Disease Control and Prevention, and the AABB have been working for close to a decade on a national hemovigilance program. And this is a voluntary
collaborative effort. The design is based around the ISBT working party hemovigilance definitions. The study uniquely has the denominator available, which is a big asset. The problem so far is that out
of the several thousand transfusing hospitals in the US, only a little over 200
are currently enrolled, and that’s because of some
of the pressures I mentioned: the difficulty in generating these data and keeping up with an active reporting system. So AABB conducted a
validation study within the NHSN Hemovigilance
module that was published in Transfusion in 2014. It used the standardized
hemovigilance case definitions. And it was conducted
by some of the founders of the NHSN Hemovigilance module, who assembled 36 fictitious post-transfusion AE cases from 12 different diagnostic categories. These were sent out by
survey to 22 different academic medical centers, of which 50% were participants in the NHSN Hemovigilance Program. They asked for case diagnoses as well as parameters related to
severity and imputability with respect to transfusion linkage. And then the concordance was determined by an expert assessor. Interestingly, despite this well-controlled situation of active hemovigilance, the case diagnosis concordance ranged from 36% to 72%, with the 36% coming from transfusion-associated
circulatory overload, which can be fatal. I think that reflects the
difficulty in trying to find
which can be applied for any sort of hemovigilance measure. But also, some of the difficulties
in assigning diagnostic codes just within a typical medical chart. Interestingly, there were no differences observed between reports from NHSN
participants and non-participants. So the training at that level did not seem to make a difference. So within the BEST
program, our collaborators, and I wanna mention OHDSI and IQVIA, Christian Reich and Patrick Ryan are here representing both OHDSI,
and Christian’s with IQVIA. And we’re working with
John Duke at Georgia Tech, who's actually doing the natural language processing. So we're attempting to apply these methods to electronic health
records, use the mining to extract outcome
information from the EHR text, apply it to the ISBT-128 information and try to improve the overall process. The focus right now is bacterial sepsis, transfusion-associated
circulatory overload and TRALI, transfusion-related
acute lung injury. One reason is that all three
of these are really amenable to interventions that can be measured. So we're hopeful that with this process, within a short period of time, we'll be able to move forward and improve hemovigilance overall in the US. And I look forward to extending this to other CBER products in future years. Thank you.
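The pipeline Alan describes, combining structured ISBT-128 exposure codes from the blood bank with text mining over EHR notes to flag candidate transfusion reactions, can be sketched roughly as follows. This is an illustrative Python sketch only: the record fields, the ISBT-128 code values, and the keyword patterns are all invented for demonstration, and simple regular expressions stand in for the real NLP and the published ISBT case definitions.

```python
import re

# Hypothetical, simplified records: in practice the ISBT-128 code would come
# from the blood bank database and the note text from the EHR.
transfusions = [
    {"patient_id": "P1", "isbt128": "E0382", "note":
        "Post-transfusion fever and hypotension; blood cultures positive."},
    {"patient_id": "P2", "isbt128": "E0701", "note":
        "Routine transfusion, no adverse reaction documented."},
]

# Toy keyword patterns standing in for real NLP; actual case definitions
# combine note text, labs, and diagnosis/condition codes.
OUTCOME_PATTERNS = {
    "possible_sepsis": re.compile(r"fever|blood cultures positive", re.I),
    "possible_taco": re.compile(r"circulatory overload|pulmonary edema", re.I),
}

def screen(records):
    """Flag transfusion exposures whose notes match an outcome pattern."""
    flagged = []
    for rec in records:
        for outcome, pattern in OUTCOME_PATTERNS.items():
            if pattern.search(rec["note"]):
                flagged.append((rec["patient_id"], rec["isbt128"], outcome))
    return flagged

print(screen(transfusions))  # [('P1', 'E0382', 'possible_sepsis')]
```

In a real system, each flagged exposure-outcome pair would then be validated against chart review, which is the iterative refinement loop the OMOP-based rapid analysis supports.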
– Great, Alan, thank you very much, that
covered a lot of ground there. And next is Darren. – So I'm gonna talk about some of the work that we've been doing to try to analyze information from the same set of individuals whose data have been collected into multiple data sources, but the plot twist here is doing so without physically pooling the data sources together. So we all know that our data are now being collected in multiple data sources, and these sources complement each other. If we are able to analyze
these databases together, we will be able to get better information and generate more valid evidence. So this is a very typical setting in a multiple-database environment. You have multiple data contributing sites. And one of them can serve
as the analysis center. So one way to do this is to ask everyone to create a database and then send their database to the analysis center. In my personal experience,
this has never happened before. So we have to go with another approach, which is to try to process
your data a little bit further at the site level so that,
in the end, what you are combining is a study-specific
individual-level data set. And what we are able to do now is make sure that the individual-level data set can be freed of any identifiable information. So this is a very typical
data set that you see in this type of environment. One observation would be one patient. One column will be one variable. So in the setting that we work with, we might be talking about
hundreds of thousands of rows and maybe a few dozen or so columns. So the thing that we have
to balance is the data that we think we need to do
the analysis that we want, and what data partners are
able or willing to share. So as of today, there are
still a lot of concerns about sharing de-identified
patient-level data. I won't go into detail on all of them. I will say one thing: even if data partners are willing to collaborate,
sometimes they do have restrictions about the type
of information they can share. Maybe because of the contractual agreement they have with the members or the patients that would not allow them to share patient-level data for secondary purposes like public health
surveillance or research. Sometimes the paperwork
you have to go through might just be too tedious
for analysis to be done in a very timely fashion,
so we have to come up with other ways to share information. So this is what we are
trying to do in Sentinel. So we try to take a step
further to process the data even further so that by
the time we get the data, it will all be summary-level. In the past workshops,
I think that we have talked about some of the
analyses that we have completed. So we are now able to do very robust multivariable-adjusted analysis using only summary-level information. So how do we use this principle to analyze this newer environment, in which you have two or more data sources that contain
information from the same set of individuals, but each data source has slightly different information. So let's simplify the setting a little bit: let's say that we have two databases. One is from a claims data partner, the other is from an EHR data partner. You can replace these with
any other data sources. The concept is the same. Let’s say that we are
able to say that, okay, we have the same set of individuals. And each of these databases sort of have different information about the patient. A very intuitive way, which is
still the commonly practiced approach, is to ask the data sources to share the patient-level data with you, so that in the end, you have what you see at the bottom, a very nice patient-level data set that will allow you to do the analysis. But what if data partners are not able to share patient-level data? Can we still analyze the data to get the evidence that we
wanted in that setting? So this is where some of
these newer methods come in. So the approach that we are
developing or refining is actually a suite of methods
called distributed regression. So at the highest level, it is just like any ordinary least squares regression analysis that we are used to doing with patient-level data. But again, the trick here is that these types of methods can do the analysis without actually pooling the databases in a centralized location. And the way they do that is by sharing what we call intermediate statistics, which you are going to see in a minute. And based on what people have
written down on paper, these methods do follow the
same computation pathway as a typical patient-level data analysis. So in theory, this should
give you identical results to what you will get from a
patient-level data analysis. So this is what I mean by having the same computation process. So many of us run
regression analysis using statistical software
like R, or SAS, or Stata, and we almost always input
a patient-level data set. And there are things that go behind the scenes that we don’t see. So if you use SAS, once you
say I’m going to run PROG REG, for example, using a
patient-level data set. SAS or R, they do
something behind the scene that we don’t see, and one
of the things that it do is to create some sort
of summary-level data set that you are seeing in
the middle of this figure. What we usually see is something on the right-hand side
as the final results. Everything that goes behind
the scenes is hidden from us, but these distributed regression methods are trying to actually open the black box, if you will, by explicitly using these intermediate statistics. And the nice thing about them is that they are not patient-level data and are less identifiable. But at the same time, they follow the same pathway in the computation process. So you should get identical results. So instead of sharing this,
what we are going to ask the data partners to share,
either with each other, or with a third party like an analysis center, is something you see at the bottom, which is just summary-level information. So this is some of the
work that we are doing. So it’s just from any of the
databases that you could use. So these are the results from the traditional patient-level data analysis, and these are the results from the distributed regressions. So these are the numerical differences. You can see they are quite similar, differing maybe at the fourth or a more distant decimal place. It is possible to get even closer results. So in some of the settings,
we were able to get results that were identical to the
16th or 17th decimal place. So for all intents and purposes,
they are identical results. But distributed regression
does this type of analysis without patient-level data. The reason why you have not
seen these methods in real life is because to do this type of analysis, you actually require multiple iterations. So the information would need to be transferred back and forth between the data contributing sites
and the analysis center. This is just the way that
regression models work. So the thing that we have to do is not just to do the
analysis, but also to allow these types of information exchange to be sort of semi-automated, or fully automated, so that it is practical in networks like Sentinel or PCORnet. It is possible to do this manually, but you would have to have people sitting in front of computers doing this back and forth, and it's just going to be too tedious for this to be practical. So we are developing this automation in PopMedNet, which is
the data sharing platform that Sentinel, and PCORnet,
and other networks are using. So this is where we are. So in terms of the statistical code that is doing the
computation, we have some functional prototype in both R and SAS. We are continuing to include
more secure protocols, so that it’s less identifiable
and more privacy-protecting. And in terms of communication code, so we also have a functional prototype in PopMedNet that will allow the user to specify different levels of automation. So it can be fully manual
or completely automated, depending on their preference. The final thing I will say
is that for all the linkage work that you have to do,
you need a global key, which is an ID or some sort of hashed ID that you would need to say
this person in this database is the same person in that database. There’s no way to get around that. But for distributed regression,
that will be the only piece of patient-level
information that you will share. You don't need to share anything else. And it's sort of doing a virtual linkage to do the analysis that you want. So in conclusion, we think
it is possible to use these methods to analyze
data from multiple sources. And we are trying to reduce
the barrier to data sharing by using these newer
privacy-protecting methods, thank you. – [Mark] Great, thanks very much, Darren. And next is Marianthi.
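The distributed regression Darren presented can be illustrated for the simplest case, ordinary least squares over horizontally partitioned data, where a single exchange of intermediate statistics suffices (generalized linear models and vertically partitioned data require the iterative exchanges he describes). A minimal Python sketch on simulated data, not the actual Sentinel implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def site_summaries(X, y):
    """A data partner shares only the intermediate statistics X'X and X'y,
    never its patient-level rows."""
    return X.T @ X, X.T @ y

def make_site(n):
    """Simulated patient-level data held locally at one site."""
    X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
    y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(scale=0.1, size=n)
    return X, y

site_a, site_b = make_site(500), make_site(500)

# Analysis center: add up each site's summaries and solve the normal equations.
xtx = sum(site_summaries(X, y)[0] for X, y in (site_a, site_b))
xty = sum(site_summaries(X, y)[1] for X, y in (site_a, site_b))
beta_distributed = np.linalg.solve(xtx, xty)

# For comparison only: the pooled analysis that never actually happens.
X_pool = np.vstack([site_a[0], site_b[0]])
y_pool = np.concatenate([site_a[1], site_b[1]])
beta_pooled, *_ = np.linalg.lstsq(X_pool, y_pool, rcond=None)

print(np.max(np.abs(beta_distributed - beta_pooled)))  # near machine precision
```

The summaries follow the same computation pathway as the pooled analysis, which is why the coefficients agree essentially exactly. Linking the same person across sources would still require the shared hashed ID mentioned above; everything else stays at the summary level.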
– I thought it was Josh. – Josh.
– Oh sorry, next is Josh. It’s Josh, I’m sorry, got
ahead of myself. (chuckles) Thanks, Josh, please go ahead. – So I've been tasked with, in 10 minutes, giving you a very high-level overview of a rapidly expanding field. So sticking with the theme
of better, cheaper, faster, I’m gonna go quickly. I will say that if my flight
back to Boston is canceled, I’m happy to stick around and provide more details on any of this. It is high-level,
everything that I’ll present is in the public domain,
references are provided on the slides, if you wanna
take a deeper dive on anything. So I think in thinking
about how to distill this into 10 minutes, a good
place to start is this recent review paper that came out of Bordeaux that attempted to classify and describe the whole host of signal detection methods that can be applied to the
types of longitudinal data that we’ve been talking about today. I’m not gonna go into
all the details of this, the tables here are
just to kind of show you that there are lots of different methods that have been proposed, used,
and described in this paper. They are very heterogeneous. So some methods are your
typical pharmacoepidemiologic study designs, cohort-type approaches, self-controlled designs. Others are methods that have been borrowed from the spontaneous adverse
event reporting world and brought into
longitudinal healthcare data. And then there are some
that I wouldn’t necessarily classify as being signal
detection methods inherently, but they are approaches
to doing multiple testing, either sequentially over time, or across a large space of potential signals. And one of these methods,
which is at the bottom there, for anyone in the front who can see it, is the TreeScan approach, which I’m gonna come back to in a few moments. When we think about the
signal detection methods that we could apply to
electronic healthcare data, it’s important, I think,
to start with asking what we’re asking of the methods. And fundamentally, we’re asking
them to screen potentially many drugs and potentially
many outcomes to tell us what are signals that
merit further attention. And these are, the
opportunity for signals, when we think about how
big the Sentinel data is, and we heard Rich earlier
today say that the Sentinel distributed data network is
nearing 300 million patients, with the addition of the Medicare data. These big data, and when
we talk about big data, Sentinel is big data, represent a huge opportunity for false positives. So the signal detection
algorithms need to tell us what to look out for,
while at the same time, trying to filter what are
the true causal relations, those cause and effect
signals that we care about, from signals that are arising
from chance or from bias. And chance and bias are things
that we deal with every day, in pharmacoepidemiology,
not just in the signal detection framework, but also in any hypothesis-driven study that we do. And as such, the methods to
filter out chance and bias are very similar to the
methods that we might think about in hypothesis-driven studies. They’re epidemiologic methods,
they’re statistical methods, and they require, as well, clinical input. In thinking now about how
to bring these methods into a distributed data
environment in big data, I like to think of the
methods as having two fundamental features that are sort of core to an effective signal
detection algorithm. One is akin to the design. So all of these methods,
in order to be effective, are producing observed
and expected counts, which is sort of the
basis of epidemiology. We’re always comparing observed
and expected in some way. And it can do that via any
design, the cohort approaches, the self-controlled approaches. And again, I think of this
aspect of the signal detection methods as the design component,
which is really intended to filter out those signals
that could be due to bias. The second key component is some process to then prioritize or rank
signals in some attempt to separate out those signals
now that are due to chance. And there’s lots of
different ways of doing this. And I think of this as being
akin to the analysis component of a study, where we could
use an alpha-based approach, and the TreeScan approach that I'll talk about uses this within a frequentist framework, to tell us what our statistically significant signals are, while adjusting for all of the multiple testing that's happening. But there are other approaches as well. Bayesian approaches,
and there are approaches that can and should consider clinical factors to determine which signals we should prioritize for further followup, or biological plausibility as well. And I think those factors
shouldn’t be lost, as we’re thinking about this. Now going into the data,
we have these methods. And there’s a couple
of different paradigms that we can think about
in casting these methods within the data that we’ve
been talking about today. And I think that these
paradigms exist along a continuum, where we’ve
heard a lot earlier today about more targeted
pre-specified analyses, where we have a single
drug or medical product and we have a single outcome
or some finite set of outcomes that we’re interested in, in
specifying and evaluating. And this, for the most
part, is where Sentinel has focused early on, and in
particular, with Mini-Sentinel. That’s largely where the activity was. There’s also at the
bottom what I would call a completely hypothesis-free space, an all-by-all type screening approach, where you could, in theory,
think of an approach that would screen every possible
medical product exposure and outcome, and do this
all-by-all activity. I’m gonna actually, I have
a couple of slides here I’m gonna just fast-forward through, just because I think there have been a couple of examples of targeted analyses that have been presented
throughout the day. And since, I wanna make
sure we have plenty of time for discussion, I’m just gonna
quickly go through these. But these are examples, that
again, are in the literature, and also on the Sentinel
webpage, that have done the targeted sort of assessments, both in a retrospective as
well as prospective way. And where Sentinel is sort of moving in the signal detection spaces now, thinking about how do we do
this kind of middle activity, where it’s not quite the
all-by-all screening, but it takes either a drug
or outcome perspective. So you might start with
a single drug and say, is this drug associated
with any potential outcome that we can observe in the data? So it scans across all potential outcomes for a single drug, or you
could flip that around and say, we’re interested in one
very specific outcome, can we scan across all drugs to determine whether there’s any specific drug or drugs that may be associated with that outcome? And again, this is kind
of where we have focused more recently in Sentinel. And one way of doing
this, and this was on that table from the review paper
that I started out with, is this TreeScan methodology. And I’m gonna give you just
a very high-level overview of what TreeScan is, but
to the second bullet point, one of the really appealing
features of TreeScan is that it’s agnostic
to where we get those observed and expected counts. So it’s compatible with
essentially any design component of signal detection that we might consider, with cohort approaches as well as
self-controlled approaches. It uses for, what I
called the analysis piece, or analysis component, a formal
multiple testing framework to account for the fact
that it’s making repeated tests across lots of different outcomes. And as you’ll see in a moment, across lots of different
levels of outcomes. So it’s adjusting for this
multiple testing, and in a very formal way that not all
signal detection methods do. And then at the top, the
third important feature of TreeScan is that it
uses a tree-based structure to do the scanning across outcomes. So I’m really focused
here on that approach in the middle that focuses
on the drug perspective, taking a single drug or medical product and looking across multiple outcomes. Here’s one example of what
a tree might look like, across which we can scan. So this is the MLCCS
structure; it has four levels. At the top, there are 18 body systems. And then all the way down at the bottom, at the fourth level, and you don't need to see the detail here, it's the structure that's important, are the ICD codes that are what we see in the data that we usually use to measure our outcomes. And they are rolled up into
these higher-level terms. And then further rolled
into these body systems. And what TreeScan does,
at a very high level, is it allows you to test across all of these nodes at any of these levels. So if you wanted to test across all fourth-level nodes, you could do that. It also tests across each of the levels. And it does this in a way that accounts for the correlation in these outcomes across these levels, and also the multiple testing that's happening, to preserve the alpha. I'm gonna briefly talk about
what we’ve been working on, and when I say we, I’m saying
that generously, to myself. There’s been lots of people
who have been leading this, and I’m summarizing this work for them. There was a pilot study
done looking at the TreeScan approach with a self-controlled design. Specifically, a
self-controlled risk interval. And this is one that Katherine Yih led. And it looked at the HPV4
vaccine, or Gardasil, and scanned across 1.9
million doses of the vaccine and found only two signals. Again, the opportunity for false positives with almost two million vaccines is huge. And only two signals arose,
both of which were explainable, because they were known and labeled, either for the HPV4 vaccine or for other vaccines that were co-administered on the same day. There's now some work
that Judy Maro has led that’s wrapping up, that’s
taken the TreeScan approach, similar to this with the
self-controlled design, and moved it into the drug space. So there is a project,
again, that’s just finishing, that was looking at three example drugs. Again, using the self-controlled design. And the results of that
work are forthcoming. Often when we do analyses of drugs, whether it’s in signal detection or more kind of hypothesis-driven analyses, we like to think of using cohort designs, ’cause that resembles the randomized trial that we might otherwise have
done if that were feasible. And so there’s been some work done now, including work that Shirley Wang has led that has looked at how we can use TreeScan in combination with a
cohort-based approach with propensity scores to deal
with measured confounding. And this first project
that’s also just wrapping up looked at doing this within
a simulated environment. And so this was largely a
demonstration of efficacy that this could work, TreeScan
with a cohort approach specifically for drugs. And then the next step will be to examine this in more of an
effectiveness framework. So looking at actual empirical examples, similar to those that I
mentioned that were done for those three drugs for
the self-controlled design. So to wrap up, I wanted to highlight three key issues that I see as being important challenges
that we have to tackle, not just within Sentinel, but as a field, if we are to push the envelope on doing signal detection in a meaningful way. One is that we're now, in Sentinel, broadly thinking about how to use TreeScan and these signal detection methods in a cohort framework where we have every possible outcome as
an outcome of interest. And we know that confounding
is outcome-specific, so we literally have to come up with a way to deal with confounding
for every single outcome. And this isn’t easy,
but I think we have some good ideas about how this could happen. Secondly, I think an
important way of bringing this back to this framework
of paradigms that I outlined is that we should also be thinking about how to do this in a prospective way. So as new drugs come into the market, we can do this TreeScan or
signal detection type approach prospectively to identify any
serious concerns early on. And then finally, Bob mentioned
at the start of this session that this issue of data
reuse is an important one. And it’s an old issue that
continues to be debated, and rightfully so, it’s
a complicated issue that has lots of differing opinions. And it’s a challenge that
we need to think through about how to prudently use
signal detection methods within these big data systems. And I’m not gonna go into details on this unless others want to, but
I just want to leave you with those thoughts, and looking
forward to the discussion. – Great, Josh, thanks very much. Again, you’re all
covering a lot of ground. Joanne?
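The tree-based scan statistic Josh describes can be illustrated in miniature. The sketch below is hypothetical: the two-level outcome "tree", the node names, and the observed and expected counts are invented, and it computes only the Poisson scan log-likelihood ratio for each node and reports the maximum. The real TreeScan additionally obtains p-values by Monte Carlo replication under the null, which is how the multiple testing across nodes and levels is formally controlled.

```python
import math

# Hypothetical two-level outcome "tree": leaves are fine-grained diagnoses;
# the prefix before "/" is the parent (body-system-like) node.
leaves = {
    "resp/ards":          (12, 5.0),   # (observed, expected)
    "resp/pneumonia":     (30, 28.0),
    "cardiac/taco":       (9, 3.0),
    "cardiac/arrhythmia": (14, 15.0),
}

C = sum(obs for obs, _ in leaves.values())          # total observed events
scale = C / sum(exp for _, exp in leaves.values())  # calibrate expecteds to sum to C

# Roll leaf counts up to parent nodes, as the tree structure requires.
nodes = {}
for name, (obs, exp) in leaves.items():
    nodes[name] = (obs, exp * scale)
    parent = name.split("/")[0]
    p_obs, p_exp = nodes.get(parent, (0, 0.0))
    nodes[parent] = (p_obs + obs, p_exp + exp * scale)

def llr(obs, exp):
    """Poisson scan log-likelihood ratio for one node (0 if no excess)."""
    if obs <= exp:
        return 0.0
    val = obs * math.log(obs / exp)
    if C - obs > 0:
        val += (C - obs) * math.log((C - obs) / (C - exp))
    return val

scores = {name: llr(obs, exp) for name, (obs, exp) in nodes.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 2))  # the node with the strongest excess
```

Note how scanning both leaves and parents lets an excess that is diluted at the body-system level (here, "resp" overall shows no excess) still surface at a specific diagnosis underneath it.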
– Thank you so much. And thank you for the opportunity
today to provide comments. You know, Sentinel has
been such an important contribution to public health over the past 10 years since it was created. Especially as we've been
talking about today, to answer really important
safety questions. And with very high-quality
evidence and data. You know, you asked us to think about the next few years and also the next 10 years, so I think it's really great.
that the next few years will see some more
automation, thinking about AI, machine learning, natural
language processing. I think that’s incredibly
important and will add a lot of value, especially
with the goal that we’ve heard a few times today of
better, faster, cheaper. And I think that Sentinel
has achieved that, and I think that there’s
more to come with a lot of those new frontiers that
we’re all getting into. I think as I think
about the next 10 years, I think about the mission of Sentinel. And the critical importance
that safety has played, but I also think about a
learning healthcare system, which I know is a mission that all of us are engaged in. And I think of the important
role that Sentinel could play given the expertise, the data sources, and all of the new
developments, and protocols, and programs, and contribution to science that Sentinel has and will make. And what it can do for a learning healthcare system of the future. So just thinking about that,
as I go through my comments, first of all, we talked about
additional sources of data. And I know that that will be one of the focus areas for Sentinel. And one of the things we might
think about in the future, again, I’m talking
about the next 10 years, is what we now call kind
of patient-centered data. And I know most of you
probably saw the announcement from Apple that they will
be having one of the apps on their phone aggregating
patient-level data. The patients’ own data from
multiple sources, aggregated. And there’s also another
company called Hugo that is already doing that,
I have the app on my phone. And really, the current
electronic health record system that we evolved into in the United States really enables that, so
that patients can see all of their data, irrespective
of the healthcare system, or where the labs are
done, or what hospital they happened to be visiting. And I think that will
empower and enable research in the future focused on the patient, irrespective of where
that patient is treated. So I think that might be
a frontier for Sentinel to consider in the future,
as another source of data. Second, new types of studies. And I was very excited
to hear about Sentinel getting into cluster randomized studies. And especially the IMPACT-AFib study, which I think is extremely impressive. And just to think about how
quickly technology is going, you’re using patients who are
treated for AFib already, who have that in their record, and you’re reaching out to them. Well, we all know that
this is no longer just the future, because now patients can
self-diagnose AFib, right, by putting their fingers on a new device that’s coming out on the
iPhone or on the Android. And so this opens up a
whole new level of research really centered around patients and their own diagnosis via technology. Okay, so in addition, with
all of the tremendous work that’s been done with Sentinel, I would suggest sharing some of the programs, the work, and the best methodology with
those outside of Sentinel. And I think we should really leverage and amplify the learnings that have
been gained with the work in Sentinel, and with others
who have worked with FDA, and successfully navigated
observational research protocols. I think it would be great
if there were kind of a living library or a catalog of sorts that could be used to give guidance, with all of the caveats
that every new study, every situation is different. But when there is a data
set, or there are outcomes, that have been vetted and
approved through Sentinel, and approved by the FDA for use, or through other
organizations, other companies, that have had something vetted by the FDA. It would be great if there
were a living library that could be used and
continually added to, so that when people start new protocols, they could start from
outcomes and other parameters that have already been
approved by the FDA. It could be a great public resource. And it could help both
researchers, companies, as well as FDA, because
there would be less back and forth and negotiation
on those kinds of parameters. Let me shift a bit. So from Alan’s talk, and
from Steven this morning, I am encouraged to see
FDA actively applying multiple alternative
strategies to meet their needs, including the use of the
OMOP Common Data Model and OHDSI as part of CBER’s BEST program. And I think that
development of new methods in BEST in the area of
natural language processing and machine learning could be
very valuable in the future. I’m also very excited
about Josh’s presentation. And to see that Sentinel’s
exploring innovative approaches for signal detection
and signal generation. I wanted to stop and say
that Dr. Gottlieb started the whole day today, and
I wrote down his quote, saying that in 2003, he
already wrote, for you, Mark, that, “Is there a way we
move away from MedWatch?” And that was in 2003. And so the answer, of
course, right now is no; we still rely on it for serious adverse events. But I do think with the
approaches that have been presented today by Josh
and some other work that’s going on, even in our shop in J&J, with OHDSI, et cetera, there are methods that we will need to work together on to be able to do better
signal detection in the future using real-world data,
rather than under-reported
spontaneous adverse events, which, as we know, is the basis of our signal detection strategy right now. I actually feel very strongly that we need to start working towards
this very quickly. And doing this together. And there are many reasons why. The first one is the most important one, and I don’t have to tell this audience, it’s the public health perspective. But second, the data,
in general, big data, are becoming widely available,
whether it’s EHR data, PatientsLikeMe, as I talked about, apps that diagnose AFib, et cetera. And we are not the only
ones looking at this data. There are technology
companies that don’t have the expertise that’s been
built over the past 10 years, who are analyzing these data. But I think it would be
best if it’s the group here, in this room, the brains in this room, and all of the people
who have been working on these types of data
over the past 10 years, to work together and
understand the best possible approaches for signal detection. Rather than tech companies
doing this in isolation. Finally, I think we do need
to think about efficiency. Sometimes, we need to think
about what we can stop doing, and what can we do more of? Some of you have heard me
talk about this already, so I know I’m saying something
that is not very popular, but can we look also at
non-serious adverse events? Which actually take a
tremendous amount of resources, hundreds if not thousands
of people’s worth of time and millions of dollars. Imagine if we could have
a more efficient way to look at real-world data for
non-serious adverse events, rather than the way we look at now, which involves reporting,
documentation submission, quality analysis,
interpretation, et cetera. And maybe we can learn from
non-serious adverse events, and that will give us a whole
other way of understanding
at serious adverse events. And this is where we do
have some work to do. And of course, we cannot
stop doing adverse event reporting for serious adverse events. So let me just close by adding one more thought and one more point. And that is something that
Darren Toh touched on. And highlighted a very interesting way to do analyses across a data network without actually sharing
patient-level data. And this raises the whole
point of the issues that have come up with sharing, or sharing
access, to real-world data. I think it behooves us
as a community to look at some of the lessons that have been learned with sharing clinical trial data, which went through several
years of very tough discussions with data holders. Just
as there are data holders with real-world data, there
are certainly, of course, data holders of clinical
trial data, including J&J. And many years of
discussion and negotiation, and a really excellent
IOM, or as we say, NASEM, report came out which really
outlined a very reasonable path to what they call
responsible sharing of data, which could maximize the benefits and minimize the risks of data sharing. And there are a lot of
lessons to be learned from those models that were laid out. And from where we are now,
where data are shared. But in several different models. There are models where
data holders actually share their data, they give it to
other researchers to analyze. There are models where some researchers put their data up on the internet. But there are also models
which I think would be very amenable to the
governance that we’re all talking about here, where it’s
not the data that’s shared, but it’s the access to
the data that’s shared. And there’s governance structure. There’s an independent
reviewer, for example, to be sure that the researcher
who wants access to the data has a good research protocol
and is a qualified researcher. At J&J, we’re involved
with my colleagues Joe
Ross, who’s here in the room, and Harlan Krumholz, and their data sharing
initiative, which is called YODA. Again, it’s sharing
the access to the data, and Yale reviews all of
the research proposals that come in to be sure
that there’s good governance over the data and the analyses. Finally, one last benefit
where I think the Sentinel network, and all
the expertise would have tremendous value in the
learning healthcare system is really on predictive modeling. And feeding information that’s gathered from Sentinel into the healthcare system, and then following up
to look at the impact. I loved the presentation
today looking at morcellators, and what happened, and
being able to see that the changes that the
FDA made to the labeling actually resulted in
decreased use of morcellators. What I would love to
see, in a real learning healthcare system, and I think Sentinel could really help in the future, and I’m sure it cannot be
done with the morcellator, because the incidence is so small. But imagine if, with the morcellator,
we didn’t just look at the incidence
of that type of surgery, but also then looked at
whether we actually decreased the risk of developing
sarcoma, let’s say, in women in the United States. Did that go down? Did the risk of infections go up? Did the risk of VTE go up? Those were all issues that
were raised when the discussion on what to do with the morcellator
issue was being vetted. And we still don’t know
the answer to that. But in a learning healthcare system, that’s the benefit of doing the analysis, looking at the impact in
the healthcare system, what’s going on in medical practice. Seeing whether we had the right impact, and then continually feeding
that back to the healthcare system, so it’s a learning
system for our public health. Thank you so much. – Joanne, thank you very much.
(audience clapping) Really appreciate the focus
on the long-term vision here. We’ll come back to that in the discussion. Marianthi?
– Yes, I would like to begin by thanking the three speakers for the very interesting
presentations that they gave, and to actually say that
I am really impressed by the amount and quality
of work that has been done in the Sentinel System. I come from an academic perspective, and I would actually
draw on the common themes that I saw in the three presentations. And I will take it a step further, with a plea that Sentinel be able to look into genuinely new development, the development of new methods. A lot of the discussion was around the data and data sources, and of
course, it is obvious that data are not only numbers,
as we knew them before, but involve text, images, laboratory data, socioeconomic data. Data come from very different
sources with the idea of actually supplementing
the profile of the patient. So combination from data sources brings up different issues
of scale and usability, and development of new methods as well. I was really excited
to see the presentation on blood transfusion,
actually, using text data. Computable text is really important, and offers a very important
element in supplementing the laboratory
characteristics of a patient. In terms of other aspects different from the other talks, it is very encouraging
to see TreeScan methods
that are very well-developed actually being used across settings, from specific to broad. The setting in which
you’d use multiple drugs and multiple adverse events
is really interesting and important, as well as the privacy and confidentiality
aspects that were brought by Darren’s talk into the forefront here. And with that, what I would like to do is to put in context to
what I am saying here, which is when we look into
methodological research, there are two broad directions. One direction goes into
transfer of technology from different fields, and perhaps
refining these technologies. And the second is the development
of fundamental ideas and methods that actually shed light onto things of importance. And then they can be fed back into the scientific enterprise in
order to actually do better adverse event identification
in this particular context. Both of these directions
are extremely important. Both of these directions actually offer quite a bit in the quest of delivering better patient care, and in the quest of actually having a
learning healthcare system. What I would like to
bring to the front here, in discussing
the concept of evidence, is the development of new methodology in terms of quantifying bias. There were simple examples
that I have put here, in terms of papers. One was a paper that came out from the OHDSI consortium. And the second came as a rebuttal to this by the authors Gruber
and Tchetgen Tchetgen. And basically, these indicate
a scientific discourse that really sheds light
onto the different issues that are at the forefront of our concerns. The second thing that I would like to bring back in the interest of time is other work here. But I’d like to put up this, a rendition of the evidence
generation pyramid. And everybody knows this thing, but I would like to single out, to give you an example here, and single out one specific layer of this evidence generation. It was shown that
amiodarone and sofosbuvir are dangerous together; they are two drugs that have a dangerous
drug-drug interaction. And these are used in
hepatitis C virus infection. And for those of you who
don’t work in the field of hepatitis, there is some information that I have put down here. But my point is that the FDA put out a modification in the label based on some case
reports that were actually
reported to them. So the additions were made
on the slippery slope. And my question is do they really need to be at the bottom of the pile? And if you are to use
them and make inference on the basis of those,
how would you do that? So what I am actually proposing here is this idea that that is being used, of the idea of the pattern,
the pattern identification. The pattern here, of course,
points to an adverse event. There are different steps for pattern identification and validation. And so that layer, almost the bottom of
the evidence generation pyramid really offers important
and interesting information. And so my question
here is: can we quantify systematic errors as an alternative to seeking to eliminate bias? And what are the important
components contributing to systematic errors
that should be assessed? And with that, I would like to stop. And I would like to say
that this audience here has the know-how, the ability, to address these and
more important questions in the quest of delivering
better patient care. And also better adverse
event identification. Thank you.
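As a concrete example of treating case reports quantitatively, in the spirit of the pattern identification proposed above (no specific statistic is named here), classical disproportionality analysis on a spontaneous-report database is a common baseline, for instance the proportional reporting ratio (PRR). A minimal sketch with invented counts:

```python
def prr(a, b, c, d):
    """Proportional reporting ratio from a 2x2 table of spontaneous reports.

    a: reports with the drug of interest AND the event of interest
    b: reports with the drug, other events
    c: reports with other drugs AND the event
    d: reports with other drugs, other events
    """
    rate_drug = a / (a + b)    # event rate among reports for this drug
    rate_other = c / (c + d)   # event rate among all other reports
    return rate_drug / rate_other

# Hypothetical counts: 30 of 200 reports for the drug mention the event,
# versus 100 of 10,000 reports for all other drugs.
print(round(prr(30, 170, 100, 9900), 1))  # → 15.0
```

A PRR well above 1 flags a disproportionate pattern worth investigating; it quantifies the signal in the reports without pretending the reporting itself is unbiased, which is exactly why such estimates still need the kind of systematic-error assessment asked for above.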
– Marianthi, thank you very much for that
thoughtful presentation. And I agree with your final point, that between the panel
and the audience here, and all of the very broad
range of collaborators in the Sentinel Initiative to date, that there is a lot of
potential for addressing future opportunities
like the ones this panel has raised, and like the ones you all have talked about all day today. And we’d like to spend a
little bit of time on that now. And it includes comments
for, questions from you all, relating to where Sentinel should go next. You’ve heard from the panel about both some specific short-term steps. Opportunities and challenges,
as well as a long-term vision. And I think, at the 10th anniversary, we are at a stage where
we’re thinking about the next generation of
Sentinel, building on all of these achievements and
a trajectory of progress. It does remind me, back
in that now several-times-referenced 2003 speech, where we did begin trying to talk about
the need for an active surveillance system
and focusing on safety. But it was really about
using real-world data more comprehensively in FDA’s work. We weren’t the first ones to
think of that, nor the last. But the vision there was important. By the way, I’m not sure
that replace MedWatch made it into the final
version of that speech. The point was just that
there was a lot of need for augmenting some of the
limitations of MedWatch. It does seem like now is a good time to say more about that vision
for the next 10 years. And how it can connect to some of the short-term opportunities. So I would like to talk
a little bit about that. And maybe picking up on a few key themes that have come up today. And maybe, again, would
encourage contributions from those of you
who are joining today, about this topic, but let
me highlight three things. One is kind of next steps and
a longer-term vision on data. A second is next steps and
longer-term vision on methods, particularly around signal detection and effectiveness, not just safety. And third is how this
approach that FDA has taken, that I will tell you is
impacting, and has impacted, analytics and the capacity
to develop evidence from big data elsewhere
in the healthcare system. What are the next steps
for Sentinel helping to drive us to a true
learning healthcare system? There’s been a lot of
discussion of big data in recent years, but
probably that should be more focused on what you
can actually do with it. Can we actually turn it into big evidence that really is useful
for improving the health of our population and
people around the world? So back to starting with data. A lot of discussion today
around the concept of, Bob, you used the term
computable phenotypes. Sort of richer ability to
describe particular kinds of individuals and what happens to them. Key to that is the types of data linkages that came up in several
presentations this afternoon, Darren, Alan, you all talked about this. So maybe you could start with a question of whether there is
more that could be done on supporting the kinds of data
linkages that you described. There’s been a lot of talk, for example, about using hash methods,
maybe more generally, Darren, than you talked
about in your approach. Also concepts like
blockchain, somebody had to talk about it today, I guess. But that could potentially permit more use of truly distributed data systems. Alan, you talked about bringing together a lot of different kinds of data with natural language processing. Any further thoughts from you
all, or others on the panel, about how to accelerate progress
on linking data together to support Sentinel in related activities? Bob?
– I’ll start. So whenever I ask this
question to Rich Platt, he reminds me that the 10-year
vision is actually very easy. It’s all data is linked and
then we just analyze it. But it’s the incremental steps we take to get there that are important. And since our focus is very much on what we’re
doing in the next steps to meet our PDUFA VI
commitments in this regard, I’ll just briefly talk
about that, and let others maybe talk about some of
the more visionary things. I think one of the
complexities that I alluded to in my opening remarks was doing this in a distributed network,
where there are lots of different data partners with lots of
different access to data. So we have integrated data
systems that have EHRs; those are already linked. But what do you do with
the claims data systems that have to reach back
to the medical records, which maybe are electronic, maybe
not electronic. So there are a lot of pragmatic
steps and we’re working very hard to make that
process more efficient. But then I think,
in the short term, you can start applying some more of these modern technologies. You can’t do NLP unless you
have machine readable records. So can you take those paper records, or the electronic records
that are available, and apply NLP to them in some
kind of very limited way, initially, maybe to extract
some pieces of information before doing full computable phenotypes. We have done some pilot work doing that. And then, with machine learning, we could take a corpus
of charts which have been expert adjudicated and
classified, and then see if machine learning or other technologies can do a better job
with the data that’s in the common data model to
predict that classification. So those are some of
the sort of next steps that we’re thinking of. – [Mark] Great, thank you. Other comments?
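The workflow described just above, training a model on a corpus of expert-adjudicated charts and then predicting the adjudicated classification from common-data-model features, can be sketched in miniature. This is an illustrative toy, not Sentinel’s actual tooling; the diagnosis codes, labels, and the simple logistic model are all made up for the example:

```python
import math
from collections import defaultdict

def train_logistic(examples, epochs=500, lr=0.1):
    """Fit a tiny logistic regression: P(case | which coded features appear).

    examples: list of (set_of_codes, label) pairs, where label 1 means the
    chart was adjudicated as a true case by an expert reviewer.
    """
    w, b = defaultdict(float), 0.0
    for _ in range(epochs):
        for feats, y in examples:
            z = b + sum(w[c] for c in feats)
            p = 1.0 / (1.0 + math.exp(-z))
            g = y - p                     # gradient of the log-likelihood
            b += lr * g
            for c in feats:
                w[c] += lr * g
    return w, b

def predict(w, b, feats):
    z = b + sum(w[c] for c in feats)
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical gold standard: sets of structured codes pulled from the
# common data model, labeled by expert chart adjudication.
gold = [
    ({"I48.0", "Z79.01"}, 1),  # adjudicated case
    ({"I48.0", "I10"}, 1),     # adjudicated case
    ({"I10"}, 0),              # adjudicated non-case
    ({"Z79.01"}, 0),           # adjudicated non-case
]
w, b = train_logistic(gold)

# Score a new, un-adjudicated chart from its structured codes alone.
print(round(predict(w, b, {"I48.0", "Z79.01"}), 2))
```

In practice the feature set would be far richer (diagnoses, procedures, drugs, labs) and the classifier would be validated against held-out adjudicated charts before being trusted at scale.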
– Yeah, I’d just like to build on some of the
things that Bob has said. Because actually our two
offices have been working together on some of these projects. We have had some real success
within the Office of Biostatistics and Epidemiology using
natural language processing to help build tools to
help our medical officers review the large number
of adverse event reports that come in more efficiently. So using the NLP as a first step to pick out some of the important elements within the narrative
portion of adverse events. And so I think there is
real promise in terms of being able to use some of
those methods in other areas. And I liked Marianthi’s
comment about computable text being a concept to think about here. And again, I think Bob has done a lot of work on innovative ways to help develop and improve case definitions. My office has collaborated
on some of that, and I think that’s very promising. – [Mark] Any other comments
on this topic, yeah? – One thing that I’ve
started to think about, and I’m really quite
new to this whole area of natural language processing, is that trying to back into
it by reading key terms within the text is a little
different from having an attending physician that’s
been treating a patient and says this is this, I saw
a case of this two years ago. This is what it is. You’re missing the
physician judgment in doing the NLP process to the extent that it’s not documented that way in the note. So that’s one of the potential hurdles in being accurate with that method. – Yeah, so great comments. And I want to come back to an issue that, well, both Bob and Joanne,
maybe put them together, short and long term, the idea, Bob, is to get to having the data that you need to do sophisticated computable phenotypes with reliable validated outcome measures. And then just run the analysis. We’re gonna come back to
the analysis in a minute, but we’re clearly not there yet. And I wonder, in light
of Joanne’s comments about moving from sort of
effectively a provider-owned fragmented data towards more
patient-controlled data, that is a long-term trend that many, including Apple and others, see as part of the healthcare system of the future. But to your point, what
are the short-term steps that can help get there? We did have some
discussion today about some patient-generated data and
some comments from people here about how patients could
get more actively engaged in contributing to Sentinel. Are there some short-term steps that would help in this direction? I guess just one more
editorial comment here. In putting data together, there
are all kinds of technical issues around linkages,
linkage methods to protect confidentiality and proprietary
concerns, validation issues, a lot of things in the technical category. But there is also another set of issues, which I call governance,
that came up today, too, about just encouraging and having the support for getting this done. And it does seem like
getting patients even more engaged directly in these
efforts might help with that. Any thoughts about initial
or next steps on getting more patient engagement,
more patient-generated data? – Well, the way that
patients now have access to their data electronically, right, because the changes in the law and everything we’ve gone
through, through EHR. That’s been one of the
major benefits that we have. And that really enables
Apple, and Hugo, and others to aggregate data based on a patient. And so a short-term thing
that we’re looking at is, just like when you accept Fitbit’s terms,
you say yes to everything. Most of the time you don’t read it, but hopefully you do read it sometimes. So if you do bring all of
a patient’s data together, and a patient is aggregating
his or her own data, you could ask them, are
you willing to contribute, and Hugo does this, are
you willing to contribute your data to research, or
advancing public health, or whatever the right
appropriate approval is. And if you do that, the patient
is basically volunteering to have his or her data
used in a passive way. They don’t have to keep consenting, right, it’s a passive thing. Their data could be aggregated. So it could be very quickly, I don’t know how quickly these things will get adopted. But based on some of the
trends that we’re seeing, it could be that very
quickly, a lot of people do consent to have their data aggregated and used for the things that we use the data holders for. The data companies, this could
be another source of data. Of course, with a lot of overlap, right, ’cause all these people
are in health systems. But this could be a
source of doing studies, doing observational research. – Other comments? It sounds like a good vision. And I would like to, yeah,
I do wanna open this up to comments here, so please do
come on up to the microphone. – [Audience Member] Mine
is particularly directed. I mean, part of the question
you’re asking right now. Which is, I wonder how many data partners have been considering revising their consent agreements with their patients. Because more and more patients are saying, yeah, take my records and use them. Are data partners thinking about institutionalizing that request? And letting patients opt
in or out right there, when they’re signing up to
use that healthcare provider? And so just take the idea and run with it, if your institution
isn’t doing it already. – Thank you, thanks for the comment. We would like more like
that, here and afterwards, as the Sentinel work goes forward. I’m gonna switch gears
a little bit to methods. So assume we keep making
progress on these data reliability and linkage issues, we had a lot of discussion
today about signal detection. And if there are any
further comments on that, based on what you’ve heard today, I’d like to hear them from the panel. But also, this notion of going beyond detecting safety signals,
maybe eventually, at the sort of all-by-all,
if we can work out the methods issues that
have been raised there. But also questions about effectiveness. And those have typically been harder. The effect sizes that you’re looking for are typically smaller and
may be a different way of looking at outcome or
effectiveness benefits verus safety questions so far. There have been, as
we’ve talked about today, a number of applications
of the Sentinel Initiative to effectiveness studies. Often these need randomization. And there have been
steps taken to do that, cluster randomization and so forth. I’d really appreciate
the views of the panel about further steps that
could help encourage appropriate and potentially
very useful studies using this kind of
infrastructure for questions related to effectiveness. Maybe a label extension study, there’s been a bit of that done already. Maybe other types of applications. Any thoughts, please go ahead, Alan. – Rich.
– I’m sorry, Rich. – Yes, there’s a mix-up there. (laughs) So yeah, I can speak a little bit to some of the effectiveness studies because my office has done some work on vaccine effectiveness
studies using data from the Centers for Medicare
and Medicaid Services. And we’ve got a number of
these that have already been published, if people
want to reach out to me, I can share that information later. But we have done some
work attempting to look at the relative effectiveness of high-dose influenza vaccines versus standard dose. Just a little bit of background, several years ago, an influenza vaccine with four times the antigen level was approved for use in
people 65 years or older. And so we have done some studies trying to generate real-world evidence about whether that is more
effective at preventing flu. But more importantly, trying
to get at some questions that would be difficult to
answer using randomized studies. So for example, one of our papers looked at hospitalizations, and
another looked at mortality following influenza to try and determine whether we did have greater effectiveness. The results have varied
a little bit by season, but certainly in some seasons, we have seen greater effectiveness in the observational data
for the high-dose vaccine. And we’ve also published
a paper looking at how long the effectiveness of Zostavax,
the herpes zoster vaccine to prevent shingles,
lasts. So I think we’re in the early stages of figuring out how to use
these studies for effectiveness. And also how to then take
the data that’s generated from those studies and use it
in our regulatory framework. But I did just want to mention
that we have some early experience in trying to
do some of those studies. – Mark, I just wanted to add one thought. Which is that you
mentioned pragmatic trials, and the opportunity to
do those within Sentinel. We’re involved in our
division outside of Sentinel, in activities trying to link randomized trials to claims data. One of the opportunities is
that you have longer followup for patients beyond the trial
duration in the claims data. One of the big challenges
is that for any trial, a small proportion come
from any one data partner. Any one insurance system. But imagine doing that in Sentinel, where you have so many data partners that cover a large portion
of the US population. There, I think we could do some really interesting work around
effectiveness, right. Because if the treatments are randomized, then we can follow them up
using Sentinel-like data, we could think about other
outcomes that weren’t examined in the trial to
really use claims data, in this case, to look at
effectiveness in ways that would be difficult in just
an observational study. Confounding for effectiveness might be much stronger than it is for safety.
thanks, Josh, for that. Bob?
– I wanted to comment on the signal detection
part of your question. – Okay, that’s fine, too. – And I wanted to refer to
Marianthi’s presentation, and her comments about the case-based approach and pattern discovery. And I referred to using essentially
supervised machine learning to identify cases, but
I think what Marianthi is referring to, and I’d like to
maybe ask her to comment on it, is a very different way of thinking about how we do this signal identification, where most of the methods we rely on now rely on very specific
codes in a coding system, or the TreeScan method relies on the tree structure of the ICD-9 codes. But I think what
Marianthi’s referring to is a more general statistical approach that would allow us to
identify patterns in the codes, and really open up the idea
of signal identification. And not have it be so restricted. I don’t know, Marianthi,
did I get that right? – Yes, actually, thanks
for commenting on this. I think one of the side effects, so to speak, if you want, of Sentinel, is to actually push you to think harder about the
foundational questions that surround the
very important questions that they are studying. And this comes, basically, from looking at what
Sentinel has done, and by looking at how the FDA acts when it takes regulatory actions. And I’m not putting this up, I didn’t put the pyramid,
the rendition of the pyramid of evidence in order to say scratch it. No, but in order to single out
a certain portion of it, where it resides, really
towards the bottom. And that is the case series. And to ask where it should be put and how you can put it there. We are far away
from actually doing that. But this is a question
of addressing a fundamental, foundational issue as to
really having definitions, and valid definitions, of
what adverse events might be. How to find, in this sea of data that is being used, local patterns in the data that would correspond to adverse events. If I had to draw a parallel, it is
between the locality in the data and the frequency with which these local
patterns appear in the data. And really begin thinking about whether we can do this automatically. There are very validated,
very well-validated and very well-studied case definitions and those can be the fundamental ground that we can begin
with to see whether we can push this type of thinking forward. It is possible, and as a matter of fact, it is bringing up the
question of how would you do inference based on
these local patterns? That is an extremely important
statistical question, and a statistical question
that does have foundations in the field of statistics,
but it is quite different from what is currently being done. I think the Sentinel System provides
have, but actually use this data, this source, to be able to actually create new ways
of thinking, and new systems, and new methodological approaches. Which actually brings me
into the question of bias. The two fundamental
papers that I actually thought really get at the heart of the issue in observational studies are
the paper by Schuemie et al., as well as the Gruber and
Tchetgen Tchetgen paper, their rebuttal to the
authors of the first paper. That exchange is extremely interesting,
and I do recommend everybody to read it, because
the Schuemie paper really puts forward a way
of measuring and adjusting for systematic error, rather than trying to eliminate it or ignore it.
or the effectiveness issue, too, but just one comment on that. It seems like there is a
lot of opportunity here. It does seem like we’re gonna have to go another level further on sort
of better, cheaper, faster. This seems like a lot of
computational work to do on the systems that are being built. But please go ahead. – Yeah, I wanted to comment
on the effectiveness, efficacy in particular. I think that is very
important for us to tackle as a community, and inevitable. I think we will face that. And so as you know better than anyone, Mark, there are two separate but complementary efforts going on, one by what used to be called the IOM, now NASEM, and the other by the Duke-Margolis Center, to really try to lay out some principles. – Complementary efforts.
– I said complementary, yeah, I said that, to really
lay out some principles of guidance in terms of the
most important basic elements. When is randomization necessary, and when is observational data good enough to give meaningful insights into efficacy or effectiveness? But the second element for
efficacy is part of this learning healthcare system
and predictive modeling. And so if we could look at
real-world data to find out which patients have the most benefit from the drug, from an efficacy perspective, and which patients are at the highest risk of having some bad outcome for which there is an approved therapy now, and try to predict that. And we can do that now using
a whole host of baseline medical risk factors, but you
can imagine in the future, if we’re doing another VTE study, or another AFib study, we’ll not just be looking at baseline risk factors, we’ll be looking at what
people buy on Amazon, and what they’re watching on Hulu, and how much they binge
watch, et cetera, et cetera. And how that also factors
into the prediction of whether they’re gonna get a VTE, fill in the blank, stroke,
et cetera, et cetera. And that’s coming in the future, I think. And all of that really is effectiveness. And that’s a way that we can
improve the healthcare system: who we’re targeting for those letters to patients asking, do you wanna talk to your healthcare provider about how your AFib’s being treated, et cetera. And then see whether we’re having an impact on the overall incidence of either adverse events or efficacy and effectiveness parameters. – Darren?
– So I would like to bring up a few points related to what we’re discussing. I will start with a very pessimistic view, and then I will end with optimism. – Good, I prefer that order. – So we had a project in which
we talked to real patients, like what do you feel about
sharing your information? The patients say, it’s all good, I’m happy to share my data with you, if it will help with
improving patient care. In the end, one patient actually said, actually, it’s not a good idea for you to talk to us, because the reason you are able to talk to us is that we are open to sharing data. You should be trying to talk to the people who live in a bunker. They are also taking medications, but we have no way to reach them. So as an epidemiologist, we always worry about selection bias. If we are only able to talk to the people we can reach, the
information will be good, but sometimes it might not be, because it will be so biased that what you generate might not be useful. So that is something that has always been in my mind. I think that having these newer data sources is great, but the representativeness of the patients, and of the information, is something that we have to think about. And that gets to the methods that will address this type of selection bias. I think that is something we also have to pay attention to. On the more optimistic side, I think that we might just be one generation away from worrying about privacy or security. The reason I say this is my relatives, my younger relatives: they couldn’t care less about sharing their information. Because the information is
being collected in so many ways, and sometimes I just
feel like they overshare. But they don’t care, so maybe the optimistic side is that, as a society, we move on from this, and maybe information will just become a commodity, so that people will be less worried about sharing information. And that would help us in getting this information from different data sources to do the work that will be
useful for public health. – Any further comments
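The selection-bias concern raised here is commonly handled with inverse-probability-of-selection weighting, where each respondent is up-weighted by the inverse of their estimated probability of responding. A minimal sketch in Python, using invented toy data (the strata, response rates, and medication flags are illustrative assumptions, not Sentinel data):

```python
from collections import defaultdict

# Toy population: (age_group, on_medication, responded_to_survey).
# Older patients respond less often, so the raw respondent average
# over-represents the young.
population = [
    ("young", 1, True), ("young", 0, True), ("young", 1, True), ("young", 0, True),
    ("old", 1, True), ("old", 1, False), ("old", 1, False), ("old", 1, False),
]

# Estimate P(respond | age_group) within each stratum.
counts = defaultdict(lambda: [0, 0])  # group -> [responders, total]
for group, _, responded in population:
    counts[group][1] += 1
    counts[group][0] += int(responded)
p_respond = {g: r / n for g, (r, n) in counts.items()}

# Naive estimate: medication use averaged over respondents only.
resp = [(g, m) for g, m, r in population if r]
naive = sum(m for _, m in resp) / len(resp)

# Weighted estimate: each respondent counts 1 / P(respond | stratum),
# standing in for the non-respondents in their stratum.
weights = [1 / p_respond[g] for g, _ in resp]
weighted = sum(w * m for w, (_, m) in zip(weights, resp)) / sum(weights)

print(f"naive={naive:.2f} weighted={weighted:.2f}")  # naive=0.60 weighted=0.75
```

In this toy data the true medication rate is 0.75; the naive respondent-only average understates it, while the weighted estimate recovers it. People "in the bunker" who never respond at all, of course, remain out of reach of any reweighting.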
on this or other topics? And similarly, comments from those of you here this afternoon, yes? – [Lily] Hi, this is
Lily Payne representing Booz Allen Hamilton, and I just want to comment on the patient sharing of data. So if we look at other models
like, if you look at Facebook and Twitter, patients,
or people, tend to share their data when they
feel like they’re a part of a larger community. If you look at Facebook,
people tend to overshare their information about
their personal lives, so I wonder if FDA could
do something to promote that sharing of data among
patients with other patients. – So I don’t know if I’m gonna
answer your question directly, but I’m gonna pull in one of Mark’s earlier questions about blockchain. There was a very interesting article in the New York Times
Magazine a few weeks ago. It was nominally about Bitcoin, but it was really about
blockchain as a disruptive technology with this point in mind. But in that author’s mind, the Facebook model was the wrong model. Because it was a centralized model where all the benefits accrued
to a single corporation. And in my understanding of that author’s framing of what blockchain
offers, one is what we all sort of think about, which is this automated trust that allows us to automate
a governance model. But the other point that
he made was that it creates a token economy, and maybe
Mark can weigh in here. – [Mark] No, you’re doing great. – (laughs) And so I think this is a really interesting question. So that article focused on
the patient-level sharing, or the person-level
sharing of information. And to enable that in a healthcare setting, patients would have to have their full electronic health data. So we don’t have that system yet, although we’ve been talking about it. But there’s an intermediate level, at least in theory, where the data holders, the data partners, or I would say all the
stakeholders in this effort. So the data holders, the
data analysts, the patients, the regulators, whoever
else wants to use this data, could participate in a token economy, where if they contribute,
whether it be data, or knowledge about methods, or coding, or clinical interpretations,
they would get Sentinel tokens that would allow them to participate more fully in this network. So I’ll stop there before
I get myself into trouble. But that’s probably like a 20-year vision. – It may be a 20-year
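The token-economy idea is, as the speaker says, a distant vision, but the ledger mechanics behind it are easy to illustrate. A purely hypothetical sketch (the `SentinelLedger` class, contributor names, and token amounts are invented for illustration and are not part of any real Sentinel system) of a hash-chained contribution log:

```python
import hashlib
import json

class SentinelLedger:
    """Toy append-only ledger: each entry stores the hash of the
    previous one, so tampering with any earlier record breaks the chain."""

    def __init__(self):
        self.entries = []
        self.balances = {}

    def credit(self, contributor, kind, tokens):
        # Chain this record to the previous entry's hash.
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {"contributor": contributor, "kind": kind,
                  "tokens": tokens, "prev": prev_hash}
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.entries.append(record)
        self.balances[contributor] = self.balances.get(contributor, 0) + tokens

    def verify(self):
        # Recompute every hash; any edited record fails the check.
        prev = "0" * 64
        for rec in self.entries:
            body = {k: v for k, v in rec.items() if k != "hash"}
            if rec["prev"] != prev or rec["hash"] != hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest():
                return False
            prev = rec["hash"]
        return True

ledger = SentinelLedger()
ledger.credit("data_partner_A", "data", 10)
ledger.credit("analyst_B", "methods", 5)
print(ledger.balances, ledger.verify())
```

A real token economy would of course need distributed consensus, identity, and governance on top of this; the sketch only shows the tamper-evidence property that makes a shared contribution log trustworthy without a central bookkeeper.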
vision, but think about the intermediate steps. I did mention at the beginning of my comments the broader changes that are taking place in the healthcare system, where there is a shift going on toward more patient empowerment and control, coming with these emerging electronic data technologies and lots of other technological changes, too.
healthcare costs what they are, more and more pressure for accountability that treatments that are
being used on patients in real-world practice are
really making a difference for them, both in terms of outcomes, and in doing it as
efficiently as possible. And I do see, in conjunction
with that, a lot more interest, and we appreciate any comments from the panel about how to accelerate it: growing pressure, in a good way, to encourage more use of systems like these to develop evidence on whether a particular
on whether a particular combination of treatments,
in a particular patient, maybe one with multiple chronic diseases, or one who could benefit
from a combination of genomics and all these other dimensions of what could be future
computable phenotypes, would really benefit from a
particular treatment pattern. And at what cost? So changes in payment
for healthcare providers, for drugs, for devices that
are moving in the direction of requiring either more
real-world evidence, or actually some demonstration of impact. Subject, certainly, to some important methodologic concerns there, too. That seems very reinforcing for what you all are talking about here. And for the steps toward it, I’m not sure we’re gonna get to that economy envisioned in that New York Times Magazine article by the Bitcoin and other coin minters, but
certainly in that direction. We are about out of time here. So I would like to, and
maybe this is a good, broad question for it,
invite any final comments for the panel about how
this whole enterprise, certainly FDA led, but by no means, far from FDA alone, how
this whole collaborative enterprise can best move forward. Any final thoughts, Bob? – I’ll just make one brief comment. I think as FDA, we always come back to our enabling legislation. And I think it’s very
important that we do that, because not only do we have to do this legally, but it creates the core use case that demands the highest-quality data and the best methods, which then become the basis for a framework for these other uses. So I think that’s where, at least, my focus is, but I think it really does lead to all these other possibilities. – Great, great comments, any others? Well let me ask you to stay seated
just for one more minute. But for right now, I’d
like to thank our panel for bringing together a lot of issues, both long-term vision and,
as Bob rightly reminds us, especially in healthcare,
it’s those short-term practical steps that really matter. Thank you all very much. (audience clapping) And I would like to thank all of you. Many of you here have
made contributions today, both to the panels and to
the work that was presented in the panels, as well
as comments, questions, making this as rich an
interaction as possible. I think that’s very
appropriate, given the nature of the Sentinel Initiative
and the further work that is going to take
place to build on it. It is very collaborative,
and very much requires this kind of ongoing working
together to make progress. I do want to thank a few more people specifically before we end. So obviously, our speakers and panelists, who not only put together
great presentations today, but put in some work before this meeting in helping us come up with the agenda. And work out some of the
details of the sessions. Special thanks to partners at
FDA, in putting this together. Michael Nguyen, Azadeh
Shoaibi, Tyler Coyle, Jamila Mwidau, all of you,
and many others have worked closely with us over the past
months to plan this event. And last, special thanks to our team at Duke-Margolis, Greg
Daniel, Isha Sharma, Sarah Supsiri and Nick
Fiore, thank you guys for keeping me on time, too,
as we finish up right now. And especially, Adam Aten,
for putting this all together. And all of you, have
a safe trip home this afternoon. Hopefully the weather’s
gotten a little bit better. And we look forward to the next steps on this collaborative enterprise. Thank you very much. (audience clapping)
