
RSS preview of Blog of HackerNoon

Extracting Social Support and Social Isolation Information from Clinical Notes: Demographics and System Performance

2025-04-02 05:59:23

Table of Links

Abstract and 1. Introduction

2 Data

2.1 Data Sources

2.2 SS and SI Categories

3 Methods

3.1 Lexicon Creation and Expansion

3.2 Annotations

3.3 System Description

4 Results

4.1 Demographics and 4.2 System Performance

5 Discussion

5.1 Limitations

6 Conclusion, Reproducibility, Funding, Acknowledgments, Author Contributions, and References

SUPPLEMENTARY

Guidelines for Annotating Social Support and Social Isolation in Clinical Notes

Other Supervised Models

4 RESULTS

4.1 Demographics

The demographic characteristics of patients within the annotated cohort are detailed in Table 2. Notably, the patient composition at MSHS was younger and more diverse as compared to patients at WCM.

Table 2: Patient demographics from annotated data [# of patients (%)]. AA: African American

4.2 System Performance

Analysis was conducted using the gold-standard, manually annotated data. The macro-averaged precision, recall, and F-scores for classifying fine- and coarse-grained SS and SI categories at the note level are provided in Table 3.

Table 3: Macro-averaged Precision (P), Recall (R), and F-scores (F) of different NLP pipelines for fine- and coarse-grained category classification. Here, we used fine-tuning and instruction for the FLAN-T5-XL model. The highest scores for individual categories are underlined.

At MSHS, the RBS outperformed the LLM-based system for both fine- and coarse-grained classification. For the fine-grained categories, the RBS achieved a macro-averaged F-score of 0.90 compared to 0.62 for the LLM. For coarse-grained classification, the RBS had a macro-averaged F-score of 0.89 versus 0.65 for the LLM.

At WCM, the RBS narrowly outperformed the LLM for fine-grained classification, with macro-averaged F-scores of 0.82 and 0.81, respectively. Coarse-grained performance was similar, with a macro-averaged F-score of 0.85 for the RBS compared to 0.82 for FLAN. The performance of the zero-shot FLAN-T5-XL model is provided in Supplementary Table S8.

Comparison to ICD codes: There were zero visits associated with the annotated clinical notes where SI was captured by the structured ICD codes (ICD-10: ‘Z60.2’, ‘V60.3’, ‘Z60.4’, ‘Z60.9’; ICD-9: ‘V60.3’, ‘V62.4’ [see Supplementary Table S9]). Without the NLP pipelines, the presence of SI would otherwise have been missed at both sites.
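For illustration, a comparison of this kind can be run against a structured diagnosis table. The sketch below is a hypothetical example only: the table, column names, and rows are assumptions, not the study's actual query or data.

```python
# Hypothetical sketch: check whether any visit linked to the annotated notes
# carries a structured SI diagnosis code. Table and column names are assumed.
import pandas as pd

SI_CODES = {"Z60.2", "Z60.4", "Z60.9", "V60.3", "V62.4"}  # ICD-9/ICD-10 SI codes cited above

# Hypothetical visit-level diagnosis rows for the annotated cohort.
dx = pd.DataFrame({
    "visit_id": [101, 101, 102, 103],
    "icd_code": ["F32.9", "I10", "E11.9", "Z59.0"],
})

si_visits = dx.loc[dx["icd_code"].isin(SI_CODES), "visit_id"].nunique()
print(f"Visits with a structured SI code: {si_visits}")  # 0, as reported above
```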


:::info This paper is available on arxiv under CC BY 4.0 DEED license.

:::

:::info Authors:

(1) Braja Gopal Patra, Weill Cornell Medicine, New York, NY, USA (co-first author);

(2) Lauren A. Lepow, Icahn School of Medicine at Mount Sinai, New York, NY, USA (co-first author);

(3) Praneet Kasi Reddy Jagadeesh Kumar, Weill Cornell Medicine, New York, NY, USA;

(4) Veer Vekaria, Weill Cornell Medicine, New York, NY, USA;

(5) Mohit Manoj Sharma, Weill Cornell Medicine, New York, NY, USA;

(6) Prakash Adekkanattu, Weill Cornell Medicine, New York, NY, USA;

(7) Brian Fennessy, Icahn School of Medicine at Mount Sinai, New York, NY, USA;

(8) Gavin Hynes, Icahn School of Medicine at Mount Sinai, New York, NY, USA;

(9) Isotta Landi, Icahn School of Medicine at Mount Sinai, New York, NY, USA;

(10) Jorge A. Sanchez-Ruiz, Mayo Clinic, Rochester, MN, USA;

(11) Euijung Ryu, Mayo Clinic, Rochester, MN, USA;

(12) Joanna M. Biernacka, Mayo Clinic, Rochester, MN, USA;

(13) Girish N. Nadkarni, Icahn School of Medicine at Mount Sinai, New York, NY, USA;

(14) Ardesheer Talati, Columbia University Vagelos College of Physicians and Surgeons, New York, NY, USA and New York State Psychiatric Institute, New York, NY, USA;

(15) Myrna Weissman, Columbia University Vagelos College of Physicians and Surgeons, New York, NY, USA and New York State Psychiatric Institute, New York, NY, USA;

(16) Mark Olfson, Columbia University Vagelos College of Physicians and Surgeons, New York, NY, USA, New York State Psychiatric Institute, New York, NY, USA, and Columbia University Irving Medical Center, New York, NY, USA;

(17) J. John Mann, Columbia University Irving Medical Center, New York, NY, USA;

(18) Alexander W. Charney, Icahn School of Medicine at Mount Sinai, New York, NY, USA;

(19) Jyotishman Pathak, Weill Cornell Medicine, New York, NY, USA.

:::


Developing Rule- and LLM-Based Systems to Identify Mentions of Fine-Grained Categories

2025-04-02 05:55:16

Table of Links

Abstract and 1. Introduction

2 Data

2.1 Data Sources

2.2 SS and SI Categories

3 Methods

3.1 Lexicon Creation and Expansion

3.2 Annotations

3.3 System Description

4 Results

4.1 Demographics and 4.2 System Performance

5 Discussion

5.1 Limitations

6 Conclusion, Reproducibility, Funding, Acknowledgments, Author Contributions, and References

SUPPLEMENTARY

Guidelines for Annotating Social Support and Social Isolation in Clinical Notes

Other Supervised Models

3.3 System Description

We developed rule- and LLM-based systems to identify mentions of fine-grained categories in clinical notes. Rules were then used to translate entity-level classifications to note-level classifications and fine-grained labels to coarse-grained labels, as described in Section 3.2. The architecture of the NLP systems is shown in Figure 2.

3.3.1 Rule-based System

As mentioned above, a major advantage of the RBS is full transparency in how classification decisions are made. We implemented the system using the open-source spaCy Matcher§ [41]. Additionally, we compiled a list of exclusion keywords (see Supplementary Table S4) to refine the rules and ensure that only relevant mentions were identified.
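As a rough illustration of this setup, the sketch below wires a small lexicon and exclusion list into the spaCy Matcher. The category labels, token patterns, and exclusion handling are hypothetical placeholders, not the study's actual rules or lexicons.

```python
# Minimal, hypothetical sketch of a lexicon-driven rule-based classifier
# using the spaCy Matcher; terms and exclusion handling are illustrative only.
import spacy
from spacy.matcher import Matcher

nlp = spacy.blank("en")
matcher = Matcher(nlp.vocab)
excluder = Matcher(nlp.vocab)

# Hypothetical token patterns for two fine-grained categories.
matcher.add("LONELINESS", [[{"LOWER": "lonely"}], [{"LOWER": "feels"}, {"LOWER": "isolated"}]])
matcher.add("INSTRUMENTAL_SUPPORT", [[{"LOWER": "lives"}, {"LOWER": "with"}, {"LOWER": "family"}]])

# Hypothetical exclusion keywords that suppress matches in irrelevant contexts.
excluder.add("EXCLUDE", [[{"LOWER": "family"}, {"LOWER": "history"}]])

def fine_grained_labels(text):
    doc = nlp(text)
    excluded = {(start, end) for _, start, end in excluder(doc)}
    labels = set()
    for match_id, start, end in matcher(doc):
        # Skip mentions that overlap an exclusion match.
        if not any(s < end and start < e for s, e in excluded):
            labels.add(nlp.vocab.strings[match_id])
    return labels

print(fine_grained_labels("Patient feels isolated but lives with family."))
# e.g. {'LONELINESS', 'INSTRUMENTAL_SUPPORT'}
```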

Figure 2: Architecture of the rule- and large language model (LLM)-based NLP systems for identifying fine- and coarse-grained categories. For the LLM input, a single clinical note was sliced into multiple sentences due to the restriction of 512 tokens. The sentence-level fine-grained categories were combined to provide document-level fine-grained categories. Finally, the rules in Section 3.2 were used to identify the coarse-grained categories from the fine-grained categories.

3.3.2 Supervised Models

Expanding on the published literature, we first attempted to implement Support Vector Machines (SVMs) and Bidirectional Encoder Representations from Transformers (BERT)-based models at WCM to identify fine-grained categories. However, these models were inappropriate due to the few SS/SI mentions in the corpus (see Supplementary Material and Table S6).

3.3.3 Large Language Models (LLMs)

We developed a semi-automated method to identify SS and SI using an open-source, fine-tuned LLM called “Fine-tuned LAnguage Net-T5” (FLAN-T5) [42, 43]. We used FLAN-T5 in a “question-answering” fashion to extract sentences from clinical texts with mentions of SS and SI subcategories. A separate fine-tuned model was created for each of the fine-grained categories.

Model Selection: T5 has been used for other classification tasks in clinical notes, and the FLAN (Fine-tuned Language Net) version of T5, which employs chain-of-thought (COT) prompting, does not require labeled training data [43]. Five variants of FLAN-T5 are available based on the number of model parameters¶. Guevara et al. [32] observed that FLAN-T5-XL performed better than the smaller models (FLAN-T5-L, FLAN-T5-base, and FLAN-T5-small), with no significant improvement from the larger FLAN-T5-XXL. Thus, we selected FLAN-T5-XL for our experiments.

Zero-shot: Given that LLMs follow instructions and are trained on massive amounts of data, they do not necessarily require labeled training data. This “zero-shot” approach was performed by providing model instruction, context, a question, and possible answer choices (‘yes,’ ‘no,’ or ‘not relevant’). An example is provided in Table 1. The option ‘no’ was selected for contexts that were negated, and ‘not relevant’ was chosen for those that did not pertain to the subcategory or the question.
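A minimal sketch of this zero-shot prompting pattern with the Hugging Face transformers library is shown below. The instruction, context, and question texts are illustrative; the paper's exact prompts appear in Table 1 and Supplementary Table S7.

```python
# Minimal zero-shot sketch with FLAN-T5, assuming an illustrative prompt
# layout (instruction, context, question, choices).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "google/flan-t5-xl"  # the variant selected in Section 3.3.3
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

def answer(instruction, context, question, choices=("yes", "no", "not relevant")):
    prompt = (
        f"{instruction}\n"
        f"Context: {context}\n"
        f"Question: {question}\n"
        f"Choices: {', '.join(choices)}\n"
        f"Answer:"
    )
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=512)
    outputs = model.generate(**inputs, max_new_tokens=5)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Hypothetical example sentence; not from the study corpus.
print(answer(
    "Answer the question using only the given context.",
    "Patient states she has been feeling lonely since her husband passed away.",
    "Does the context indicate that the patient feels lonely?",
))
```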

Fine-tuning: Since FLAN-T5-XL (zero-shot) with instruction had poor F-scores (see Supplementary Table S8), the models were improved by fine-tuning them with synthetic examples that could help the model learn about the specific SS or SI subcategories. For each fine-grained category, about 50 (yes), 50 (no), and 50 (not relevant) examples were created. The synthetic examples themselves became a validation set for tuning the parameters. ChatGPT (with GPT-4.0)‖ was used to help craft context examples, but after several iterations on the validation set they were ultimately refined by the domain experts so that each example was specifically instructive about the inclusions and exclusions of the category. Examples of prompts for loneliness are provided in Table 1. All fine-tuning examples and questions for each subcategory are provided in Supplementary Material and Table S7. Furthermore, giving the LLMs specific stepwise instructions to follow (“instruction tuning”) has been shown to improve performance by reducing hallucinations [44, 45]. Therefore, we added an instruction as part of the prompt.

Parameters: Previously, the parameter-efficient Low-Rank Adaptation (LoRA) fine-tuning method was used with FLAN-T5 models to identify SDOH categories [32]. However, the newer Infused Adapter by Inhibiting and Amplifying Inner Activations (IA3) method was selected for its better performance [46]. We fine-tuned the models for 15-20 epochs. The fine-tuning parameters can be viewed in our publicly available code∗∗.
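The authors' exact configuration lives in their public repository; the sketch below only illustrates the general recipe of IA3 fine-tuning of FLAN-T5 with the Hugging Face peft library, using made-up synthetic examples and assumed hyperparameters (target modules, learning rate, batch size).

```python
# Hedged sketch of IA3 parameter-efficient fine-tuning for one subcategory
# model; examples and hyperparameters are illustrative, not the paper's.
from datasets import Dataset
from peft import IA3Config, TaskType, get_peft_model
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

model_name = "google/flan-t5-xl"  # a smaller variant also works for a quick test
tokenizer = AutoTokenizer.from_pretrained(model_name)
base_model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# IA3 adapters on T5 attention (k, v) and feed-forward (wo) projections.
ia3_config = IA3Config(task_type=TaskType.SEQ_2_SEQ_LM,
                       target_modules=["k", "v", "wo"],
                       feedforward_modules=["wo"])
model = get_peft_model(base_model, ia3_config)

# Hypothetical synthetic prompt/answer pairs (~50 per answer class in the paper).
examples = [
    {"prompt": "Instruction: ... Context: Patient reports feeling very lonely. "
               "Question: Does the patient feel lonely? Answer:", "answer": "yes"},
    {"prompt": "Instruction: ... Context: Patient denies any loneliness. "
               "Question: Does the patient feel lonely? Answer:", "answer": "no"},
]

def preprocess(example):
    enc = tokenizer(example["prompt"], truncation=True, max_length=512)
    enc["labels"] = tokenizer(example["answer"])["input_ids"]
    return enc

train_ds = Dataset.from_list(examples).map(preprocess, remove_columns=["prompt", "answer"])

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="ia3-loneliness",
                                  num_train_epochs=18,        # paper: 15-20 epochs
                                  per_device_train_batch_size=4,
                                  learning_rate=3e-3),
    train_dataset=train_ds,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```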

Table 1: Example of instruction, question, context, choices, and answer for the loneliness subcategory model.

3.3.4 Evaluation

All evaluations were performed at the note level for both the fine- and coarse-grained categories. To validate the NLP systems, precision, recall, and F-score were macro-averaged so that each category received equal weight regardless of its number of instances. Instances of the emotional support and no emotional support subcategories were rare in the underlying notes (see Supplementary Table S5 for full counts), and therefore performance on these subcategories could not be assessed.
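Concretely, note-level macro-averaging can be computed as in the sketch below; the label sets are hypothetical, and the real evaluation compares the gold-standard annotations against each pipeline's note-level output.

```python
# Hypothetical sketch of note-level, macro-averaged evaluation; a note may
# carry several fine-grained categories, so labels are multi-label sets.
from sklearn.metrics import precision_recall_fscore_support
from sklearn.preprocessing import MultiLabelBinarizer

gold = [{"loneliness"}, {"instrumental_support", "social_network"}, set()]
pred = [{"loneliness"}, {"instrumental_support"}, {"loneliness"}]

mlb = MultiLabelBinarizer()
y_true = mlb.fit_transform(gold)
y_pred = mlb.transform(pred)

p, r, f, _ = precision_recall_fscore_support(y_true, y_pred,
                                             average="macro", zero_division=0)
print(f"macro P={p:.2f}  R={r:.2f}  F={f:.2f}")
```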


:::info This paper is available on arxiv under CC BY 4.0 DEED license.

:::


§ https://spacy.io/api/matcher

¶ https://huggingface.co/docs/transformers/model_doc/flan-t5

‖ https://openai.com/blog/chatgpt

∗∗ https://github.com/CornellMHILab/Social_Support_Social_Isolation_Extraction

:::info Authors:

(1) Braja Gopal Patra, Weill Cornell Medicine, New York, NY, USA (co-first author);

(2) Lauren A. Lepow, Icahn School of Medicine at Mount Sinai, New York, NY, USA (co-first author);

(3) Praneet Kasi Reddy Jagadeesh Kumar, Weill Cornell Medicine, New York, NY, USA;

(4) Veer Vekaria, Weill Cornell Medicine, New York, NY, USA;

(5) Mohit Manoj Sharma, Weill Cornell Medicine, New York, NY, USA;

(6) Prakash Adekkanattu, Weill Cornell Medicine, New York, NY, USA;

(7) Brian Fennessy, Icahn School of Medicine at Mount Sinai, New York, NY, USA;

(8) Gavin Hynes, Icahn School of Medicine at Mount Sinai, New York, NY, USA;

(9) Isotta Landi, Icahn School of Medicine at Mount Sinai, New York, NY, USA;

(10) Jorge A. Sanchez-Ruiz, Mayo Clinic, Rochester, MN, USA;

(11) Euijung Ryu, Mayo Clinic, Rochester, MN, USA;

(12) Joanna M. Biernacka, Mayo Clinic, Rochester, MN, USA;

(13) Girish N. Nadkarni, Icahn School of Medicine at Mount Sinai, New York, NY, USA;

(14) Ardesheer Talati, Columbia University Vagelos College of Physicians and Surgeons, New York, NY, USA and New York State Psychiatric Institute, New York, NY, USA;

(15) Myrna Weissman, Columbia University Vagelos College of Physicians and Surgeons, New York, NY, USA and New York State Psychiatric Institute, New York, NY, USA;

(16) Mark Olfson, Columbia University Vagelos College of Physicians and Surgeons, New York, NY, USA, New York State Psychiatric Institute, New York, NY, USA, and Columbia University Irving Medical Center, New York, NY, USA;

(17) J. John Mann, Columbia University Irving Medical Center, New York, NY, USA;

(18) Alexander W. Charney, Icahn School of Medicine at Mount Sinai, New York, NY, USA;

(19) Jyotishman Pathak, Weill Cornell Medicine, New York, NY, USA.

:::


Cloud Repatriation: A Wrong Turn for Cloud Investment

2025-04-02 05:11:58

Cloud repatriation – the movement of workloads from public cloud back to on-premises or private infrastructure – is accelerating.

According to a recent QA report, as many as 71% of companies have considered repatriation, driven primarily by escalating cloud costs and unexpected financial complexities.

In short, cloud wasn’t the blue sky many were promised. But this shift fundamentally misinterprets why cloud investments fail.

In this piece, I will argue that the real failure is ours: we haven’t understood cloud for what it is. On many fronts, treating cloud technologies like bare metal servers has held back true innovation and the kinds of efficiencies offered by genuinely cloud-native approaches.

You only have to look at the huge savings possible with practices like FinOps to see that the potential is there.

So, why are we retreating from savings?

Here Are 3 Reasons Cloud Initiatives Struggle and Why Repatriation Rarely Provides the Answer

Traditional MSPs Lagging in the Cloud-Native Era

Managed Service Providers (MSPs) critically shape cloud success. However, many MSPs remain trapped in traditional IT practices. They're accustomed to managing servers, patching security vulnerabilities, and performing routine infrastructure maintenance. Cloud-native operations, like serverless computing, container orchestration, and infrastructure automation, demand entirely different skillsets.

Partnering with outdated MSPs results in high costs and low ROI, as companies pay premium prices without the benefits.

This is especially true of those larger cloud hosting businesses that pivoted from bare metal to cloud without sufficiently updating their core business model, leading to a high market penetration by providers unable to drive real results.

Success demands MSPs fluent in cloud-native practices: automation, Infrastructure as Code (IaC), and dynamic scalability. Without these capabilities, inefficiencies persist, leading businesses to blame the cloud instead of their partnerships.

Skeuomorphic IT Design: Old Habits Die Hard

Another core challenge is "skeuomorphic IT" – replicating familiar on-premises infrastructure in the cloud. Many organisations use a "lift-and-shift" approach, simply moving existing systems without redesign. This shortcut feels comfortable but sacrifices the cloud's true advantages.

Cloud efficiencies are driven by taking advantage of virtualisation: ephemerality, elasticity, and resource pooling. Skeuomorphic design undermines this, reducing cloud to just another data center – one managed by Amazon somewhere far away. Real ROI emerges only when teams embrace cloud-native designs and cloud-native thinking.

Outdated Governance Frameworks

Another significant barrier to cloud success is outdated governance frameworks that haven’t evolved alongside cloud technology. Traditional governance, designed for stable, predictable on-premises environments, often conflicts with the dynamic, scalable nature of cloud computing.

Companies using legacy governance models struggle to manage rapid provisioning, decentralised decision-making, and the continuous optimisation that cloud demands.

Modernising governance means aligning policies and procedures with cloud capabilities, enabling agility and innovation without sacrificing control. Effective governance in the cloud era includes clear visibility, robust security tailored specifically to cloud environments, real-time cost management, and the flexibility to scale quickly.

Organisations failing to update their governance frameworks find themselves mired in inefficiency and uncontrolled costs, often leading them to mistakenly blame cloud itself.

The Hidden Costs of Repatriation

The QA report highlights another often-overlooked problem: repatriation itself isn't cheap or easy. Returning workloads to on-premises infrastructure involves significant capital expenditures, increased staffing demands, and renewed responsibility for hardware lifecycle management and security. These hidden costs quickly add up, undermining anticipated savings.

Furthermore, repatriation can stall innovation and agility, which are essential competitive advantages in today’s fast-moving market.

So, for businesses stuck between a rock and a hard place, what is the answer?

Embracing Cloud Design Principles: The Real Path to ROI

Repatriation occasionally makes sense, such as for regulatory compliance or specialised technical requirements. But generally, repatriation means retreating rather than solving fundamental issues.

True cloud ROI requires culturally, technologically, and organisationally embracing cloud design principles.

This means designing specifically for cloud strengths – leveraging microservices, serverless platforms, containerised architectures, scalable infrastructures, and managed services. The MACH architecture (Microservices, API-first, Cloud-native, and Headless) is the application model of the future.

Successful cloud strategies also require partnering with modern MSPs who deeply understand cloud-native operations. They are key to helping organisations that lack the in-house skills ride the wave of technological development.

On a governance level, adopting FinOps/GreenOps and other cloud-native governance strategies will help align organisations with the potential of cloud.

Last but not least, no think piece in 2025 would be complete without a reference to the benefits of AI. However, sometimes a cliché is a cliché because it’s the stone-cold truth. Already, AI is making huge improvements to resource management in cloud workloads, a trend that’s only set to continue.

Organisations embracing these comprehensive strategies significantly reduce costs, boost operational efficiency, and accelerate innovation. Instead of retreating, businesses must reassess their cloud approach holistically – partnering effectively, designing intelligently, aligning culturally, and fully exploiting the cloud’s capabilities.

The solution isn't repatriation. The solution is evolution. Aligning cloud strategies with clear business goals and genuinely adopting modern cloud practices will deliver the ROI businesses seek.

This Is How We Created Gold-Standard Data for Developing the NLP Pipelines

2025-04-02 05:11:21

Table of Links

Abstract and 1. Introduction

2 Data

2.1 Data Sources

2.2 SS and SI Categories

3 Methods

3.1 Lexicon Creation and Expansion

3.2 Annotations

3.3 System Description

4 Results

4.1 Demographics and 4.2 System Performance

5 Discussion

5.1 Limitations

6 Conclusion, Reproducibility, Funding, Acknowledgments, Author Contributions, and References

SUPPLEMENTARY

Guidelines for Annotating Social Support and Social Isolation in Clinical Notes

Other Supervised Models

3.2 Annotations

To create gold-standard data for developing the NLP pipelines, we selected 300 notes from 300 unique patients at MSHS and 225 notes from 221 unique patients at WCM for fine- and coarse-grained manual annotation. Notes were chosen from unique patients to maximize the contextual diversity of SS/SI terms (different note-writers, different time periods, and avoiding redundancy caused by copy-forward practices within a single patient’s EHR). To optimize the gold-standard annotation set, the notes selected for review were enriched for mentions of SS and SI: 75 notes were selected that had at least one occurrence of an SI lexicon term, another 75 notes for SS, and finally 75 notes were randomly selected from the remainder of the underlying corpus. At MSHS, 75 additional notes were selected that contained a clinical note template, to further enrich the annotation corpus with notes in which a clinician was prompted (by the template) to assess SS/SI.
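A rough sketch of this enrichment-based sampling is shown below; the notes table, lexicon-hit flags, and counts are hypothetical stand-ins, and the MSHS template-based enrichment step is omitted.

```python
# Hypothetical sketch of enrichment-based note sampling; flags are assumed to
# come from a prior lexicon-matching pass over the corpus.
import pandas as pd

notes = pd.DataFrame({          # hypothetical corpus metadata
    "note_id": range(1000),
    "patient_id": range(1000),  # one note per patient, as in the study design
    "has_si_term": [True] * 200 + [False] * 800,
    "has_ss_term": [False] * 200 + [True] * 300 + [False] * 500,
})

si_sample = notes[notes.has_si_term].sample(75, random_state=0)
ss_sample = notes[notes.has_ss_term].sample(75, random_state=0)
rest = notes.drop(si_sample.index).drop(ss_sample.index)
random_sample = rest.sample(75, random_state=0)

annotation_set = pd.concat([si_sample, ss_sample, random_sample])
print(len(annotation_set))   # 225, matching the WCM annotation set size
```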

The Brat Rapid Annotation Tool (BRAT) [40] was used to annotate the notes manually with the same annotation configuration schema across sites. The annotation guideline‡ and lexicons are provided in Supplementary Tables S3. Initially, the annotations were performed at the entity level (every instance of a lexicon term in the note text) using BRAT. For evaluation, the entity-level annotations were converted to “document” (note) level. For example, if there was a single entity mentioning loneliness and two mentions of instrumental support in a given note, the loneliness and instrumental support subcategories were assigned to that note. Finally, the coarse-grained categories were assigned to each document using rules. SS was assigned to a document if there were one or more mentions of any SS subcategories and similarly, SI was labeled if there were one or more mentions of any SI subcategories. The above note would be annotated with both SI (for loneliness) and SS (for instrumental support).
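The entity-to-note and fine-to-coarse rules can be expressed compactly; the sketch below uses illustrative subcategory names rather than the study's full label set.

```python
# Minimal sketch of the entity-to-note and fine-to-coarse aggregation rules
# described above; subcategory names are illustrative.
SS_SUBCATEGORIES = {"emotional_support", "instrumental_support",
                    "social_network", "general_ss"}
SI_SUBCATEGORIES = {"loneliness", "no_emotional_support",
                    "no_instrumental_support", "no_social_network", "general_si"}

def note_level_labels(entity_mentions):
    """entity_mentions: list of fine-grained labels, one per annotated span."""
    fine = set(entity_mentions)                     # de-duplicate to note level
    coarse = set()
    if fine & SS_SUBCATEGORIES:
        coarse.add("SS")
    if fine & SI_SUBCATEGORIES:
        coarse.add("SI")
    return fine, coarse

# One loneliness mention + two instrumental-support mentions -> both SI and SS.
print(note_level_labels(["loneliness", "instrumental_support", "instrumental_support"]))
```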

The notes were meticulously reviewed by two annotators and disagreements were resolved by a third adjudicator to create the final gold-standard corpus. For coarse-grained annotation, the inter-annotator agreement (IAA) Cohen’s Kappa scores were 0.92 [MSHS] and 0.86 [WCM]; for fine-grained, 0.77 [MSHS] and 0.81 [WCM]. The counts of fine- and coarse-grained categories found in the gold-standard data are provided in Supplemental Table S5.

The rule book was used to train the annotators and was continually updated during the adjudication process. Often, disagreeing annotations could both be seen as correct given the inherent subjectivity of the classification process; however, new rules were created to arrive at one consistent label for edge cases. Sometimes, rules were created for more practical reasons; for example, mentions of ‘psychotherapy’ were excluded from emotional support because otherwise almost every note in the MSHS psychiatric corpus would be flagged. Of note, mentions were only labelled when SS/SI was explicit and not implied. For example, a mention of ‘boyfriend’ or ‘living alone’ without further context would not count. The general subcategory became a “catch-all” for mentions that clearly involved support or isolation but for which a single fine-grained category could not be discerned. For example, ‘staying on his best friend’s couch’ could be seen as instrumental support (providing shelter), social network (having friends), or emotional support (best friend implies a level of closeness). At both institutions, the IAA reflected the subjective, overlapping nature of the fine-grained subcategories. Another reason for disagreements between annotators was the site-specific familiarity required to recognize acronyms and social services, e.g., ‘HASA’ stands for the HIV/AIDS Services Administration.


:::info This paper is available on arxiv under CC BY 4.0 DEED license.

:::


‡ rule book and annotation guideline are used interchangeably

:::info Authors:

(1) Braja Gopal Patra, Weill Cornell Medicine, New York, NY, USA (co-first author);

(2) Lauren A. Lepow, Icahn School of Medicine at Mount Sinai, New York, NY, USA (co-first author);

(3) Praneet Kasi Reddy Jagadeesh Kumar, Weill Cornell Medicine, New York, NY, USA;

(4) Veer Vekaria, Weill Cornell Medicine, New York, NY, USA;

(5) Mohit Manoj Sharma, Weill Cornell Medicine, New York, NY, USA;

(6) Prakash Adekkanattu, Weill Cornell Medicine, New York, NY, USA;

(7) Brian Fennessy, Icahn School of Medicine at Mount Sinai, New York, NY, USA;

(8) Gavin Hynes, Icahn School of Medicine at Mount Sinai, New York, NY, USA;

(9) Isotta Landi, Icahn School of Medicine at Mount Sinai, New York, NY, USA;

(10) Jorge A. Sanchez-Ruiz, Mayo Clinic, Rochester, MN, USA;

(11) Euijung Ryu, Mayo Clinic, Rochester, MN, USA;

(12) Joanna M. Biernacka, Mayo Clinic, Rochester, MN, USA;

(13) Girish N. Nadkarni, Icahn School of Medicine at Mount Sinai, New York, NY, USA;

(14) Ardesheer Talati, Columbia University Vagelos College of Physicians and Surgeons, New York, NY, USA and New York State Psychiatric Institute, New York, NY, USA;

(15) Myrna Weissman, Columbia University Vagelos College of Physicians and Surgeons, New York, NY, USA and New York State Psychiatric Institute, New York, NY, USA;

(16) Mark Olfson, Columbia University Vagelos College of Physicians and Surgeons, New York, NY, USA, New York State Psychiatric Institute, New York, NY, USA, and Columbia University Irving Medical Center, New York, NY, USA;

(17) J. John Mann, Columbia University Irving Medical Center, New York, NY, USA;

(18) Alexander W. Charney, Icahn School of Medicine at Mount Sinai, New York, NY, USA;

(19) Jyotishman Pathak, Weill Cornell Medicine, New York, NY, USA.

:::


Where and How Can You Donate Cryptocurrency?

2025-04-02 05:06:09

Cryptocurrency is becoming a bigger part of our daily lives. While it was initially seen as a tool for tech enthusiasts and investors, it is now widely accepted for purchases, investments, and even charitable donations. Donating cryptocurrency allows individuals to contribute to causes they care about without converting their digital assets into fiat currency.

Although many charitable organizations recognize the potential of cryptocurrency donations, they often face significant challenges in accepting them. Legal restrictions, compliance issues, and tax regulations vary by country, making it difficult for some charities to accept crypto. Additionally, technical challenges such as wallet security, transaction processing, and staff training require extra resources and effort.

To bridge this gap, several initiatives have emerged to make crypto donations more accessible. These projects aim to simplify the donation process for both donors and nonprofits, ensuring that more organizations can benefit from cryptocurrency contributions.

One of the most notable initiatives in this space is Donate Crypto (https://donate-crypto.online), launched in 2023. The project was created by David Dawson, a lead systems analyst at Google, who has a passion for using technology for good. In his spare time, he and a group of engineers work on nonprofit projects, and Donate Crypto is their most impactful initiative to date.

David and his colleagues were originally donating to charities through fundraising challenges and campaigns. However, they realized that many organizations struggled to accept cryptocurrency donations.

“I once tried to donate 12 Monero (XMR) to a charity, but I couldn’t find a single organization that accepted it. That’s when I knew we needed to fix this,”

– David Dawson

In response to this issue, they created Donate Crypto, a platform that simplifies crypto donations by connecting donors with vetted charities. The platform allows users to select the causes they want to support, such as environmental conservation, racial justice, or healthcare initiatives. Once a donor chooses a cause, they can contribute using one of over 50 supported cryptocurrencies. Donate Crypto then automatically distributes the funds to trusted organizations that align with the selected cause.

One of the core principles of Donate Crypto is transparency. The platform partners only with well-established, highly reputable charities that meet strict accountability standards. Additionally, Donate Crypto remains politically neutral.

“We are completely apolitical. Right now, we don’t work with any political organizations. Our only goal is to make the world a better place,” says David.

Another unique aspect of Donate Crypto is its funding model. Unlike many donation platforms that charge transaction fees, Donate Crypto operates without taking a commission. The project is fully funded by David’s personal contributions and a few private donors. “We have minimal operational costs, and we don’t intend to profit from this,” he explains. Every single crypto donation goes directly to charity.

Since its launch, Donate Crypto has seen remarkable success. In 2023, the platform raised $469,901 in cryptocurrency donations, and in 2024, that amount grew to $781,390. The team has ambitious plans for the future, setting a goal to surpass $1 million in donations by the end of 2025.

“We love being efficient,” David shares. “We have strong connections in the crypto community, and we want to make an impact in the nonprofit sector.”

As cryptocurrency continues to gain mainstream adoption, platforms like Donate Crypto demonstrate how digital assets can be used for good. By making it easier for charities to accept crypto donations, these initiatives empower donors to support causes they care about in a modern and efficient way. If you’re looking for a way to contribute using cryptocurrency, Donate Crypto is a great place to start.

How We Used an Iterative Method to Collect Lexicons for the Fine-Grained Categories of SS and SI

2025-04-02 05:02:06

Table of Links

Abstract and 1. Introduction

2 Data

2.1 Data Sources

2.2 SS and SI Categories

3 Methods

3.1 Lexicon Creation and Expansion

3.2 Annotations

3.3 System Description

4 Results

4.1 Demographics and 4.2 System Performance

5 Discussion

5.1 Limitations

6 Conclusion, Reproducibility, Funding, Acknowledgments, Author Contributions, and References

SUPPLEMENTARY

Guidelines for Annotating Social Support and Social Isolation in Clinical Notes

Other Supervised Models

3 METHODS

3.1 Lexicon Creation and Expansion

Computational approaches to NLP tasks require annotated lexicons and gold-standard data [2]. We collected lexicons for the fine-grained categories of SS and SI using an iterative method that included manual chart reviews and semi-automatic methods.

3.1.1 Manual Chart Review

Zhu et al. [26] developed a lexicon for identifying SI from clinical notes of patients with prostate cancer in the context of recovery support. Initially, this lexicon, which included 24 terms, was selected; however, it yielded relatively few clinical notes at MSHS and WCM compared to the published report. A list of terms for each category was created and extensively reviewed by the study team, which included clinical psychiatrists and psychologists. We manually reviewed 50 notes at each site to find SS and SI keywords to enrich the existing lexicons.

3.1.2 Semi-automatic Method

The lexicons from the manual chart review described above were enhanced using word embeddings. First, the manually generated lexicons were vectorized using word2vec [39] and Equation 1.
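Equation 1 is not reproduced in this excerpt, so the sketch below only illustrates the general idea of expanding a seed lexicon with word2vec nearest neighbours using gensim (the library referenced in the footnote); the corpus, parameters, and seed terms are hypothetical.

```python
# Hedged sketch of semi-automatic lexicon expansion with word2vec neighbours;
# sentences, hyperparameters, and seed terms are illustrative only.
from gensim.models import Word2Vec

# Hypothetical tokenized clinical sentences (the real corpus is EHR text).
sentences = [
    ["patient", "reports", "feeling", "lonely", "and", "isolated"],
    ["lives", "alone", "with", "no", "family", "support"],
    ["strong", "support", "from", "spouse", "and", "children"],
]
model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, workers=2)

seed_terms = ["lonely", "isolated"]          # terms from the manual lexicon
candidates = set()
for term in seed_terms:
    if term in model.wv:
        candidates.update(w for w, _ in model.wv.most_similar(term, topn=5))

print(candidates)   # candidate terms reviewed by domain experts before inclusion
```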


:::info This paper is available on arxiv under CC BY 4.0 DEED license.

:::


† https://radimrehurek.com/gensim/

:::info Authors:

(1) Braja Gopal Patra, Weill Cornell Medicine, New York, NY, USA (co-first author);

(2) Lauren A. Lepow, Icahn School of Medicine at Mount Sinai, New York, NY, USA (co-first author);

(3) Praneet Kasi Reddy Jagadeesh Kumar, Weill Cornell Medicine, New York, NY, USA;

(4) Veer Vekaria, Weill Cornell Medicine, New York, NY, USA;

(5) Mohit Manoj Sharma, Weill Cornell Medicine, New York, NY, USA;

(6) Prakash Adekkanattu, Weill Cornell Medicine, New York, NY, USA;

(7) Brian Fennessy, Icahn School of Medicine at Mount Sinai, New York, NY, USA;

(8) Gavin Hynes, Icahn School of Medicine at Mount Sinai, New York, NY, USA;

(9) Isotta Landi, Icahn School of Medicine at Mount Sinai, New York, NY, USA;

(10) Jorge A. Sanchez-Ruiz, Mayo Clinic, Rochester, MN, USA;

(11) Euijung Ryu, Mayo Clinic, Rochester, MN, USA;

(12) Joanna M. Biernacka, Mayo Clinic, Rochester, MN, USA;

(13) Girish N. Nadkarni, Icahn School of Medicine at Mount Sinai, New York, NY, USA;

(14) Ardesheer Talati, Columbia University Vagelos College of Physicians and Surgeons, New York, NY, USA and New York State Psychiatric Institute, New York, NY, USA;

(15) Myrna Weissman, Columbia University Vagelos College of Physicians and Surgeons, New York, NY, USA and New York State Psychiatric Institute, New York, NY, USA;

(16) Mark Olfson, Columbia University Vagelos College of Physicians and Surgeons, New York, NY, USA, New York State Psychiatric Institute, New York, NY, USA, and Columbia University Irving Medical Center, New York, NY, USA;

(17) J. John Mann, Columbia University Irving Medical Center, New York, NY, USA;

(18) Alexander W. Charney, Icahn School of Medicine at Mount Sinai, New York, NY, USA;

(19) Jyotishman Pathak, Weill Cornell Medicine, New York, NY, USA.

:::
