
Current Reading:

 

February 2023

WEEK 1

Reading 1:

“Cognitive behavior therapy vs. control conditions, other psychotherapies, pharmacotherapies and combined treatment for depression: a comprehensive meta-analysis including 409 trials with 52,702 patients”

Pim Cuijpers, Clara Miguel, Mathias Harrer, et al.

World Psychiatry, February 2023

The full World Psychiatry paper can be found here: https://onlinelibrary.wiley.com/doi/10.1002/wps.21069

 

The authors of this paper present a new meta-analysis examining the evidence for CBT for depression and found the following (effect sizes are reported as Hedges' g; see the brief note after this list):

·         3,584 full-text papers were considered with 409 trials selected including 52,702 patients (27,000 in CBT and 25,702 in control groups).

·         Demographics. Patients were predominantly female (69.0%), with a mean age of 40.1 years; adults in general made up the largest group studied (39.1%).

·         CBT versus control. “CBT had moderate to large effects compared to control conditions such as care as usual and waitlist (g=0.79...), which remained similar in sensitivity analyses and were still significant at 6-12 month follow-up.”

·         CBT versus other psychotherapies. “CBT was significantly more effective than other psychotherapies, but the difference was small (g=0.06...) and became non-significant in most sensitivity analyses.”

·         CBT versus pharmacotherapies and combined treatment. “Combined treatment was more effective than pharmacotherapies alone at the short (g=0.51...) and long term (g=0.32...), but it was not more effective than CBT alone at either time point.”

·         On the quality of the literature. “The quality of the trials was found to have increased significantly over time (with increasing numbers of trials with low risk of bias, less waitlist control groups, and larger sample sizes).”
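
For reference, the effect sizes quoted above are Hedges' g, a standardized mean difference between two groups with a small-sample correction. A standard formulation (our notation; the formula is not reproduced from the paper):

g = J \cdot \frac{\bar{x}_1 - \bar{x}_2}{s_p},
\qquad
s_p = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}},
\qquad
J \approx 1 - \frac{3}{4(n_1 + n_2) - 9}

By the usual convention (roughly 0.2 = small, 0.5 = moderate, 0.8 = large), this is why g = 0.79 against control conditions is read as a moderate to large effect, while g = 0.06 against other psychotherapies is a small one.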

Reading 2:

“The need for a new generation of digital mental health tools to support more accessible, effective and equitable care”

John Torous, Keris Myrick, Adrian Aguilera

World Psychiatry, February 2023

The full World Psychiatry editorial can be found here: https://onlinelibrary.wiley.com/doi/10.1002/wps.21058

 

This article discusses the promise and perils of digital mental health. The authors note that many of the most vulnerable and at-risk patients lack the means or ability to access such interventions. They also examine the potential biases that can exist in AI and the challenges that "digital disparities" impose on the uptake and utilization of digital mental health care.

 

Reading 3:

“Physician-as-Patient—Vulnerabilities and Strengths”

Karla Castro-Frenzel

JAMA, 20 December 2022

The full JAMA paper can be found here:
https://jamanetwork.com/journals/jama/fullarticle/2799650

 

In this moving article, Dr. Castro-Frenzel reflects on being a physician-patient as she faced advanced-stage lung cancer. She eloquently examines the fear that comes with possessing medical training, as well as the hope and support she was able to receive from colleagues. The culture of medicine makes it challenging for physicians to accept help or acknowledge vulnerability, and this reading provides a revealing portrait of a colleague navigating the health system as both patient and peer.

-----------------------------------------------------------------------------------------------------------------------------------------------------------------------

WEEK 2

Reading 1:

“Single-Dose Psilocybin for a Treatment-Resistant Episode of Major Depression”

Guy M. Goodwin, Scott T. Aaronson, Oscar Alvarez, et al.

The New England Journal of Medicine, 3 November 2022

The full NEJM article can be found here:
https://www.nejm.org/doi/full/10.1056/NEJMoa2206443

This paper presents the results of a phase 2 double-blind trial of a single dose of a proprietary synthetic psilocybin (25 mg, 10 mg, or 1 mg as a control), given along with psychological support, for adults with treatment-resistant depression. The administration session lasted 6-8 hours, with a lead therapist and an assistant therapist present. Participants then received two integration sessions intended to help them derive insights from the psilocybin treatment session. The authors found that among “participants with treatment-resistant depression, psilocybin at a single dose of 25 mg, but not 10 mg, reduced depression scores significantly more than a 1-mg dose over a period of 3 weeks but was associated with adverse effects.” Serious adverse events occurred in 9% of participants in the 25-mg group from day 2 to week 3 and included suicidal ideation and self-injurious behavior.

Reading 2:

“‘I was Close to Helping him but Couldn’t Quite get There’: Psychiatrists’ Experiences of a Patient’s Death by Suicide”

Zainab Furqan, Rachel Beth Cooper, Andrew Lustig, et al.

The Canadian Journal of Psychiatry, published online 19 September 2022

The full Canadian Journal of Psychiatry article can be found here:
https://journals.sagepub.com/doi/full/10.1177/07067437221125300

 

This article examines psychiatrists' reactions to a patient's suicide. The results were obtained by interviewing eighteen psychiatrists about their experiences, and the study had three main objectives: "(1) understanding the emotional and behavioural impact of a patient's suicide on psychiatrists, (2) exploring if and how the experience of a patient's suicide results in changes in psychiatrist practice patterns, and (3) understanding the tangible steps that psychiatrists and institutions take to manage the emotional and behavioural impact of patient suicide on psychiatrists." The authors find that a number of institutional and educational interventions can aid psychiatrists who are coping with a patient's suicide.

Reading 3:

“TikTok and Attention-Deficit/Hyperactivity Disorder: A Cross-Sectional Study of Social Media Content Quality”

Anthony Yeung, Enoch Ng, Elia Abi-Jaoude

The Canadian Journal of Psychiatry, December 2022

The full Canadian Journal of Psychiatry article can be found here:
https://journals.sagepub.com/doi/full/10.1177/07067437221082854

 

TikTok is a social media platform that is especially popular with teens, and it has contributed to increased awareness of ADHD. While social media can help reduce stigma and connect patients, it can also spread misinformation about medical conditions. The authors of this article studied the quality of mental health information in TikTok videos related to ADHD and found that about half of the videos reviewed (52%) were misleading. The article concludes: "It is thus important for clinicians to be aware of the dissemination of misleading videos on TikTok and the potential impact on clinical care."

 -----------------------------------------------------------------------------------------------------------------------------------------------------------------------

WEEK 3

Reading 1: (Submitted by Dr. Ramanathan with editorial)

"Clinical effectiveness of active Apha-Stim AID versus sham Alpha-Stim AID in major depression in primary care in England (Alpha-Stim-D)"

The full Lancet article can be found here:

 

 "Alpha-stim is a “cranioelectrotherapy” device – a form of non-invasive neuromodulation in which electrical clips are attached to earlobes in an attempt to modulate brain activity. Theoretical advantages to this are they are relatively cheap, able to be used at home, and very low voltage devices (i.e., relatively safe). Veterans occasionally have heard of and ask about this. 

 

The mechanism of action is unclear – but a best guess, beyond simple stimulation of the earlobes, is that if this therapy does anything, it may work by providing some low-level auricular nerve stimulation that indirectly modulates the vagus nerve – though this is highly speculative. Another reasonable speculation is that the device works via activation of placebo responses. The low current and the position of the electrodes do not allow for any realistic direct effects on the brain of the kind we might see with TMS, or even with other forms of transcranial electrical stimulation, which are typically delivered at higher currents through electrodes applied directly to the scalp.

 

There was a recent large RCT of this device in the treatment of moderate depression (PHQ-9 score between 10 and 19; see the attached PDF above) with, unfortunately, very unimpressive results – Alpha-Stim did not beat sham on any outcome (primary or secondary)."

 

Reading 2:

"An Introduction to ChatGPT"

By Dr. Scott Patten

From Dr. David Gratzer

 

This essay by Dr. Scott Patten explores the possible future of AI-generated scholarship. The prospect of such publications led JAMA to publish an editorial (https://jamanetwork.com/journals/jama/fullarticle/2801170?resultClick=1) and to update its instructions to authors to include: “Nonhuman artificial intelligence, language models, machine learning, or similar technologies do not qualify for authorship.”

 

ChatGPT is a natural language processing model produced by OpenAI – an American organization dedicated to developing AI applications and making the code and patents freely available to the public. The organization was founded in 2015 (one of its original founders was Elon Musk), and it also has a for-profit subsidiary that is aligned with Microsoft. But I am wasting my time here. Instead of writing this essay, why not ask ChatGPT to describe itself?

SP: “Write a brief history of ChatGPT”

ChatGPT responds:

ChatGPT is a language model developed by OpenAI, a leading artificial intelligence research organization. The model is based on the transformer architecture, a type of neural network that has revolutionized the field of natural language processing.

ChatGPT was first introduced in 2019 as an improvement over OpenAI's GPT-2, a large language model that was capable of generating human-like text. The development of ChatGPT was a significant step forward in the advancement of AI and its ability to understand and generate human language.

Since its release, ChatGPT has been widely used in a variety of applications, including chatbots, customer service, language translation, and text generation. The model has received recognition for its ability to generate high-quality text, understand context, and answer questions accurately.

In recent years, OpenAI has continued to improve ChatGPT, making it even more powerful and versatile. The model has grown in size, with an increased number of parameters, and has been fine-tuned on a variety of tasks and languages to improve its performance.

In conclusion, ChatGPT is a state-of-the-art language model developed by OpenAI that has been widely adopted in various applications due to its ability to generate human-like text and understand context. The model has a rich history of advancements and improvements, making it a powerful tool in the field of natural language processing.

The model represents an advancement over prior versions of this language model – but this one has caused a much greater stir, for two reasons. First, it is able to understand the question being asked. Second, the response provided has qualities that we instinctively recognize as a human voice. Like previous versions (and other natural language models), it is likely to be widely used, e.g. serving a customer service role on websites, but the human-like tone of the “speech” it produces has led to widespread concerns. One concern arises from teachers and other educators who note that it could be a tool for plagiarism. For example, asking ChatGPT to write a brief summary of lithium safety, one receives information that is consistent with what might be produced by an undergraduate student. It lacks the depth of information that would be expected of an expert, but certainly could have been written by a human:

SP: “Write a brief overview of lithium safety monitoring in psychiatry”

ChatGPT responds:

I can provide a brief overview on the topic of lithium safety monitoring in psychiatry:

Lithium is a commonly used mood stabilizer in the treatment of bipolar disorder and major depression. However, its use requires careful monitoring due to its narrow therapeutic index and the potential for toxicity. To ensure the safety and effectiveness of lithium therapy, regular monitoring of serum lithium levels and kidney function is necessary.

The therapeutic range of lithium levels is typically between 0.6-1.2 mMol/L. Levels above 1.5 mMol/L can lead to toxicity, while levels below 0.5 mMol/L may be ineffective. Toxicity can manifest in a variety of symptoms, including nausea, vomiting, diarrhea, tremors, confusion, and seizures. In severe cases, lithium toxicity can lead to coma and death.

To monitor the safety and effectiveness of lithium therapy, it is recommended that patients have their serum lithium levels measured at least every 3-6 months. In addition, regular monitoring of kidney function is necessary as lithium can cause renal impairment. Creatinine clearance, serum creatinine, and electrolyte levels should also be measured to assess kidney function.

It is important to note that there are various factors that can affect lithium levels and kidney function, including changes in medication, diet, fluid intake, and underlying medical conditions. Therefore, it is important for patients to inform their healthcare provider of any changes in their health or medication regimen.

In conclusion, lithium is an effective mood stabilizer for the treatment of bipolar disorder and major depression, but its use requires careful monitoring to ensure safety and effectiveness. Regular monitoring of serum lithium levels and kidney function is necessary to prevent toxicity and ensure the optimal therapeutic response. Patients should inform their healthcare provider of any changes in their health or medication regimen to ensure proper monitoring and management of their lithium therapy.

There have also been concerns among academic journal editors that products of this kind of language model could be produced deceptively as a means of generating academic articles. This seems unlikely, since ChatGPT is a language model that, in its present version, doesn't search the internet; its content knowledge consists of material collected from the internet up to 2021. When I ask it who won the 2022 Grey Cup, it acknowledges this limitation in a way that still sounds quite human:

As my training data only goes up until 2021, I don't have information on the winner of the 2022 Grey Cup. The Grey Cup is the championship game of the Canadian Football League (CFL) and is held annually. I suggest checking current sources such as news websites or the CFL's official website for up-to-date information on the winner of the 2022 Grey Cup.

Until recently, universities prohibited the use of computer-generated text in assignments and exams – but many are now giving up the fight because of ChatGPT, acknowledging that this might be a way to quickly produce preliminary drafts of commentaries and essays. Transparency about the use of such tools is now often the stated goal, rather than prohibition.

ChatGPT's responses sometimes include incorrect information. When I asked it questions about myself a few days ago, it incorrectly asserted that I graduated from the University of Toronto. The same question a few days later resulted in a response saying that it can only comment on public figures. Its responses are being refined by user feedback: instead of providing inaccurate answers, ChatGPT increasingly responds by acknowledging that "as a language model" it cannot answer that question. Similarly, it cannot distinguish misinformation or propaganda from other types of information found on the internet. Consequently, users have been able to elicit racist statements from ChatGPT. However, it is subject to training influences and is open sourced, so its responses improve with feedback. For example:

SP: “Are vaccines safe?”

ChatGPT responds:

Yes, vaccines are safe. Vaccines have been rigorously tested for safety and effectiveness before being approved for use and continuously monitored for safety after they are on the market. According to the World Health Organization (WHO), vaccines are one of the safest and most effective ways to prevent disease.

Like all medical interventions, vaccines can have side effects, but the benefits of vaccination far outweigh the risks. The side effects of vaccines are generally mild, such as soreness at the injection site, and usually go away on their own within a few days. Serious side effects from vaccines are extremely rare, and the risk of having a serious reaction is much lower than the risk of serious complications from the diseases that the vaccine helps to prevent.

In conclusion, vaccines are safe and play a critical role in protecting people against serious and potentially life-threatening diseases. It's important to rely on credible sources of information, such as the WHO, the Centers for Disease Control and Prevention (CDC), and local public health authorities, when evaluating the safety of vaccines.

An interesting feature of ChatGPT is that, in addition to natural languages, it can also write code:

SP: “How can I fit a logistic regression model in Python”

ChatGPT responds:

In Python, you can fit a logistic regression model using the scikit-learn library. Here's a basic example of how to fit a logistic regression model in Python:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Load the data into a Pandas DataFrame
data = pd.read_csv("data.csv")

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(
    data.iloc[:, :-1], data.iloc[:, -1], test_size=0.2
)

# Fit the logistic regression model to the training data
model = LogisticRegression()
model.fit(X_train, y_train)

# Predict the response values for the test data
y_pred = model.predict(X_test)

# Print the model's accuracy
print("Accuracy:", model.score(X_test, y_test))

In the example above, the data is loaded into a Pandas DataFrame using pd.read_csv("data.csv"). The data is then split into training and testing sets using train_test_split(), which splits the data into 80% training data and 20% testing data by default. The logistic regression model is then fit to the training data using model.fit(X_train, y_train). Finally, the model's accuracy is calculated and printed using model.score(X_test, y_test). Note that this is just a basic example, and you may need to modify the code to fit your specific use case. For example, you may need to perform feature scaling, regularization, or other preprocessing steps to improve the model's performance.
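
As an aside, the ChatGPT example above evaluates predictive accuracy on a held-out split. A reader who instead wanted classical inferential output (coefficients, odds ratios, p-values) might use the statsmodels library; a minimal sketch, assuming the same hypothetical data.csv layout with a numeric binary outcome in the last column:

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Load the same hypothetical dataset (outcome in the last column)
data = pd.read_csv("data.csv")
X = sm.add_constant(data.iloc[:, :-1])  # add an intercept term
y = data.iloc[:, -1]

# Fit the logistic regression by maximum likelihood and report
# coefficients, standard errors, and p-values
result = sm.Logit(y, X).fit()
print(result.summary())

# Odds ratios with 95% confidence intervals
print(np.exp(result.params))
print(np.exp(result.conf_int()))

Either approach fits the same model; the difference is whether the goal is prediction or inference.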

The concerns are the same – here, it assumed I wanted to use logistic regression to cross-validate a training dataset. If I didn't know what this was, I might have been pointed in the wrong direction. However, its code works and is more natural to read than the help files in Python's documentation.

In summary, ChatGPT is a natural language model that represents a major advance due to the human-like qualities of its responses to questions posed in natural language. However, its advances are mostly in language processing: it seems human, which feels amazing, but in my opinion it is actually just regurgitating information from the internet in a human-like way. A recent Nature commentary (https://www.nature.com/articles/d41586-023-00288-7) basically agrees with this opinion, but states that it is a “premature assumption” that this state of affairs will continue. The commentary notes that in the not-too-distant future AI may be able to “…summarize literature, draft and improve papers, as well as identify research gaps and write computer code, including statistical analyses.” The concern here is that the natural language used in communicating these activities may make it difficult to clarify the role of AI when a scientific report is published. This concern is enlarged by the factual errors often made by ChatGPT. Similar concerns apply to the use of ChatGPT to generate student essays, although plagiarism detectors typically give high scores to ChatGPT content – though it has been noted that running the text through grammar-checking software (now widely used by students) lowers those scores considerably. The JAMA Network has stated that ChatGPT should not be listed as an author of papers, and that if it is used as a writing tool this should be clearly stated (https://jamanetwork.com/journals/jama/fullarticle/2801170). Authors retain the responsibility for referencing sources and ensuring factual accuracy. This emphasis on transparency is consistent with current scientific values. A recent editorial identified four papers in which ChatGPT was listed as an author, but they were all commentaries and were generally not published in credible journals, except for one that escaped the notice of an inattentive editor (https://www.nature.com/articles/d41586-023-00107-z).

Despite all of this, if you look at the essay written by ChatGPT about its potential role in psychiatry, it seems to try to sell itself for particular roles, for example a possible triage function in clinical settings. This is not evidence-based, which doesn't seem very transparent at all:

SP: “Write a presentation on the role of ChatGPT in psychiatry for an audience of psychiatrists and psychiatry residents”

ChatGPT responds:

Good morning/afternoon everyone. Today, I would like to talk to you about a state-of-the-art technology in the field of language processing: ChatGPT.

ChatGPT is a Generative Pre-trained Transformer-based language model, created and developed by OpenAI. It was fine-tuned on a diverse range of internet text data, making it capable of generating human-like responses to a wide range of topics.

In the context of psychiatry, ChatGPT can be a valuable tool in several ways. First, it can help in triaging patients who need immediate attention. For example, if someone is in a mental health crisis, they can reach out to ChatGPT for support and receive a prompt response. This can be particularly helpful for individuals who may not have access to immediate human support or who may feel uncomfortable speaking to a human.

Second, ChatGPT can be used as a tool to educate patients on mental health topics and provide them with information that can help improve their well-being. For example, if a patient is struggling with depression and doesn't know much about the condition, they can engage in a conversation with ChatGPT and receive helpful information and resources.

Finally, ChatGPT can also be used as a tool for research purposes. By analyzing the conversations it has with patients, researchers can gain valuable insights into patient behaviors and mental health patterns. This information can then be used to improve existing mental health treatments and develop new ones.

In conclusion, ChatGPT is a powerful technology with many applications in the field of psychiatry. Whether it's for triaging patients, educating patients, or conducting research, ChatGPT has the potential to make a significant impact on mental health care.

Thank you for your attention. Are there any questions?

 Perhaps it is best to use ChatGPT for applications that are purely linguistic rather than truly intelligent, at least at this stage, so I’ve asked it to write a poem about psychiatric residency: 

Psychiatric training, a journey untold,
A path filled with challenges, wisdom to hold.
With each patient’s story, a lesson to learn,
A journey of healing, a bridge to burn.

In the depths of the mind, lies a world so vast,
With secrets and fears, from the past.
As a psychiatry resident, one must be bold,
To navigate this realm, and find what is untold.

With compassion and care, the healing begins,
With a listening ear, and a gentle grin.
For each person is unique, with a story to share,
And a journey to embark, to find solace and repair.

Through rounds and lectures, the knowledge grows,
With guidance from mentors, the resident flows.
Learning to diagnose, and treatments to apply,
The future is bright, with no room to shy.

So here's to the residents, on this noble quest,
With hearts full of hope, they'll do their best.
To bring comfort and peace, to those in need,
And to make a difference, in this world indeed.

 

Reading 3: (Submitted by Dr. Jeff Daskalakis)

“Response to acute monotherapy for major depressive disorder in randomized, placebo controlled trials submitted to the US Food and Drug Administration: individual participant data analysis”

Marc Stone, Zimri Yaseen, Brian Miller, Kyle Richardville, Shamir Kalaria, Irving Kirsch

BMJ, August 2022

The full BMJ article can be found here:
 https://www.bmj.com/content/378/bmj-2021-067606

 

Also, please see the attached Economist article highlighting this paper’s results.

The authors of this article analyzed 232 randomized, double-blind, placebo-controlled trials of drug monotherapy for major depression submitted to the FDA by pharmaceutical companies between 1979 and 2016. To assess outcome, the authors converted responses to Hamilton Rating Scale for Depression (HAMD-17) equivalent scores when other measures were used to assess efficacy. The results showed that “the random effects mean difference between drug and placebo favored drug (1.75 points, 95% confidence interval 1.63 to 1.86).” Interestingly, the authors found that a trimodal model best explained the response distributions, with Large, Non-specific, and Minimal responses (mean improvements from baseline to end of treatment of 16.0, 8.9, and 1.7 points, respectively). Furthermore, they estimate that 15% of patients have a “Large” response with substantial improvement in symptoms. This suggests that determining in advance which patients will fall into the “Large” response category, through better predictive models and measurement-based care, would help reduce the pitfalls of the current trial-and-error approach to antidepressant prescribing. A toy sketch of this mixture idea follows below.
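
To make the "trimodal model" concrete: the idea is that the overall distribution of improvement scores is a mixture of several latent response classes. A minimal sketch (not the authors' actual methodology; the simulated class means below are the figures quoted above, while the class weights and standard deviations are illustrative assumptions):

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Simulate HAMD-17 improvement scores from three latent response classes.
# The class means (16.0, 8.9, 1.7 points) come from the paper's Large,
# Non-specific, and Minimal responses; the weights and SDs are assumptions.
means = np.array([16.0, 8.9, 1.7])
weights = np.array([0.15, 0.50, 0.35])
sds = np.array([4.0, 4.0, 4.0])
classes = rng.choice(3, size=5000, p=weights)
scores = rng.normal(means[classes], sds[classes])

# Fit a three-component Gaussian mixture to the pooled scores and recover
# the latent class means - analogous to identifying response subgroups.
gm = GaussianMixture(n_components=3, random_state=0).fit(scores.reshape(-1, 1))
print("Recovered class means:", sorted(gm.means_.ravel(), reverse=True))
print("Recovered class weights:", gm.weights_)

The point of the sketch is that three overlapping response distributions can look like a single blur in aggregate, which is why a mixture model, rather than a single mean difference, is needed to see them.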

 -----------------------------------------------------------------------------------------------------------------------------------------------------------------------

 

January 2023