<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="http://hdl.handle.net/2451/41677">
    <title>FDA Community</title>
    <link>http://hdl.handle.net/2451/41677</link>
    <description />
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="http://hdl.handle.net/2451/75648" />
        <rdf:li rdf:resource="http://hdl.handle.net/2451/63311" />
        <rdf:li rdf:resource="http://hdl.handle.net/2451/62255" />
        <rdf:li rdf:resource="http://hdl.handle.net/2451/61227" />
      </rdf:Seq>
    </items>
    <dc:date>2026-04-11T14:52:58Z</dc:date>
  </channel>
  <item rdf:about="http://hdl.handle.net/2451/75648">
    <title>Neural Speech Decoding and Understanding Leveraging Deep Learning and Speech Synthesis</title>
    <link>http://hdl.handle.net/2451/75648</link>
    <description>Title: Neural Speech Decoding and Understanding Leveraging Deep Learning and Speech Synthesis
Authors: Chen, Xupeng
Abstract: Decoding human speech from neural signals is essential for brain-computer interface (BCI) technologies, restoring communication in individuals with neurological deficits. However, this remains a highly challenging task due to the scarcity of paired neural-speech data, signal complexity, high dimensionality, and the limited availability of public tools. We first present a deep learning-based framework comprising an ECoG Decoder that translates electrocorticographic (ECoG) signals from the cortex into interpretable speech parameters, and a novel source-filter-based speech synthesizer that reconstructs spectrograms from those parameters. A companion audio-to-audio auto-encoder provides reference features to support decoder training. This framework generates naturalistic and reproducible speech and generalizes across a cohort of 48 participants. Among the tested architectures, the 3D ResNet achieved the best decoding performance in terms of Pearson Correlation Coefficient (PCC=0.804), followed closely by a SWIN model (PCC=0.796). Our models decode speech with high correlation even under causal constraints, supporting real-time applications. We successfully decoded speech from participants with either left or right hemisphere coverage, which may benefit patients with unilateral cortical damage. We further perform occlusion analysis to identify cortical regions relevant to decoding.&#xD;
&#xD;
We next investigate decoding from different forms of intracranial recordings, including surface (ECoG) and depth (stereotactic EEG or sEEG) electrodes, to generalize neural speech decoding across participants and diverse electrode modalities. Most prior works are constrained to 2D grid-based ECoG data from a single patient. We aim to design a deep-learning model architecture that can accommodate variable electrode configurations, support training across multiple subjects without subject-specific layers, and generalize to unseen participants. To this end, we propose SwinTW, a transformer-based model that leverages the 3D spatial locations of electrodes rather than relying on a fixed 2D layout. Subject-specific models trained on low-density 8×8 ECoG arrays outperform prior CNN and transformer baselines (PCC=0.817, N=43). Incorporating additional electrodes—including strip, grid, and depth contacts—further improves performance (PCC=0.838, N=39), while models trained solely on sEEG data still achieve high correlation (PCC=0.798, N=9). A single multi-subject model trained on data from 15 participants performs comparably to individual models (PCC=0.837 vs. 0.831) and generalizes to held-out participants (PCC=0.765 in leave-one-out validation). These results demonstrate SwinTW’s scalability and flexibility, particularly for clinical settings where only depth electrodes—commonly used in chronic neurosurgical monitoring—are available. The model’s ability to learn from and generalize across diverse neural data sources suggests that future speech prostheses may be trained on shared acoustic-neural corpora and applied to patients lacking direct training data.&#xD;
&#xD;
We further investigate two complementary latent spaces for guiding neural speech decoding, enhancing the interpretability and structure of the decoding process. HuBERT offers a discrete, phoneme-aligned latent space learned via self-supervised objectives. Decoding sEEG signals into the HuBERT token space improves intelligibility by leveraging pretrained linguistic priors. In contrast, the articulatory space provides a continuous, interpretable embedding grounded in vocal tract dynamics. The articulatory space enables speaker-specific speech synthesis through differentiable articulatory vocoders and is especially suited for both sEEG and sEMG decoding, where signals reflect muscle movements linked to articulation. While HuBERT emphasizes linguistic structure, the articulatory space provides physiological interpretability and individual control, making them complementary in design and application. We demonstrate that both spaces can serve as intermediate targets for speech decoding across invasive and non-invasive modalities. As a future direction, we extend our articulatory-guided framework toward sentence-level sEMG decoding and investigate phoneme classifiers within articulatory space to assist decoder training. These developments, together with the design of more advanced single- and cross-subject models, support our long-term goal of building accurate, interpretable, and clinically deployable speech neuroprostheses.</description>
    <dc:date>2025-05-01T00:00:00Z</dc:date>
  </item>
  <item rdf:about="http://hdl.handle.net/2451/63311">
    <title>OpenABC-D: A Large-Scale Dataset For Machine Learning Guided Integrated Circuit Synthesis</title>
    <link>http://hdl.handle.net/2451/63311</link>
    <description>Title: OpenABC-D: A Large-Scale Dataset For Machine Learning Guided Integrated Circuit Synthesis
Authors: Basak Chowdhury, Animesh
Abstract: Logic synthesis is a challenging and widely-researched combinatorial optimization problem during integrated circuit (IC) design. It transforms a high-level description of hardware in a programming language like Verilog into an optimized digital circuit netlist, a network of interconnected Boolean logic gates, that implements the function. Spurred by the success of ML in solving combinatorial and graph problems in other domains, there is growing interest in the design of ML-guided logic synthesis tools. Yet, there are no standard datasets or prototypical learning tasks defined for this problem domain. Here, we describe OpenABC-D, a large-scale, labeled dataset produced by synthesizing open-source designs with a leading open-source logic synthesis tool and illustrate its use in developing, evaluating and benchmarking ML-guided logic synthesis. OpenABC-D has intermediate and final outputs in the form of 870,000 And-Inverter-Graphs (AIGs) produced from 1500 synthesis runs plus labels such as the node counts, longest path, area, and timing of the AIGs. We define four learning problems on this dataset and benchmark existing solutions for these problems.&#xD;
The code related to dataset creation and the benchmark models is available at: https://github.com/NYU-MLDA/OpenABC.git.&#xD;
The dataset generated is available during a review period at this location: https://app.globus.org/file-manager?origin_id=ae7b03ad-9e50-472c-9601-ff99054ae47c&amp;origin_path=%2F. &#xD;
The data will be published here following the review.</description>
    <dc:date>2021-09-01T00:00:00Z</dc:date>
  </item>
  <item rdf:about="http://hdl.handle.net/2451/62255">
    <title>AI and Procurement - A Primer</title>
    <link>http://hdl.handle.net/2451/62255</link>
    <description>Title: AI and Procurement - A Primer
Authors: Sloane, Mona; Chowdhury, Rumman; Havens, John C.; Lazovich, Tomo; Rincon Alba, Luis
Abstract: Artificial intelligence (AI) systems are increasingly deployed in the public sector. As these technologies can harm citizens and pose a risk to society, existing public procurement processes and standards are in urgent need of revision and innovation. This issue is particularly pressing in the context of recession-induced budget constraints and increasing regulatory pressures. The AI Procurement Primer sets out to equip individuals, teams, and organizations with the knowledge and tools they need to kick-off procurement innovation as it is relevant to their field and circumstances. To do so, it first sets the scene by examining the histories and current issues related to procurement and AI. It then outlines six tension points that emerge in the context of procurement and AI - definitions, process, incentives, institutional structures, technology infrastructure, and liabilities - each of which are paired with a set of questions that can help address these tension points. The primer also outlines five narrative traps that can hinder equitable innovation in AI procurement, alongside strategies to avoid these traps. The primer closes with four calls for action as concrete steps that can be taken to create environments in which AI procurement innovation can happen, namely to re-define the process, create meaningful transparency, build a network, and cultivate talent.</description>
    <dc:date>2021-06-28T00:00:00Z</dc:date>
  </item>
  <item rdf:about="http://hdl.handle.net/2451/61227">
    <title>Metadata: Systematic review and meta-analysis of the persistence and disinfection of human coronaviruses and their viral surrogates in water and wastewater</title>
    <link>http://hdl.handle.net/2451/61227</link>
    <description>Title: Metadata: Systematic review and meta-analysis of the persistence and disinfection of human coronaviruses and their viral surrogates in water and wastewater
Authors: Silverman, Andrea; Boehm, Alexandria</description>
    <dc:date>2020-01-01T00:00:00Z</dc:date>
  </item>
</rdf:RDF>