32 innovations from Bar-Ilan University, available for licensing, co-investment, or spin-out through BIRAD.
Shehadeh Naim
Reducing the conversion rate from prediabetes to diabetes: a personalized approach

Numerous studies, such as the DPP trial, the Finnish Diabetes Prevention Study (Tuomilehto et al.), and the Da Qing study, have established that lifestyle modification and weight reduction can significantly decrease the progression from prediabetes to diabetes. However, large-scale implementation of the DPP approach faced challenges, both because a uniform approach is not suitable for everyone and because many patients struggled to adhere to intensive consultation and follow-up. It is also important to acknowledge the "heterogeneity of treatment effects": treatments can vary in effectiveness among individuals (Dahabreh et al., JAMA 2023). Randomized clinical trials like the DPP can determine the overall effectiveness of treatments, but it is essential to understand which participants benefit and which do not. Treatment benefits tend to increase with an individual's baseline risk of developing diabetes. For instance, participants at the highest risk in the DPP trial who were randomized to metformin experienced a 22-percentage-point reduction in the risk of progression to diabetes at three years compared to the control group, whereas no benefit was observed among participants at the lowest risk. Similarly, those at the highest risk who were assigned to the lifestyle intervention had a 40-percentage-point reduction in the risk of developing diabetes at three years compared to the control group, while the reduction among those at the lowest risk was only 8 percentage points (Sussman et al., BMJ 2015). In our intervention, we have implemented a program that educates individuals about healthy eating and lifestyle habits while providing support for weight loss and increased physical exercise. To ensure adherence and engagement, we have adopted a personalized approach.
This involves active involvement of the family physician and the patient in discussing and selecting the arm of intervention that best aligns with the patient's specific needs and circumstances. Furthermore, we have developed and implemented practical, benefit-based tailored calculators to assist family physicians in considering the prescription of metformin and/or obesity medications:
• The first calculator assesses the individual's level of risk and likelihood of responding to metformin according to ADA guidelines.
• The second calculator assists the physician in assessing the patient's eligibility for obesity medication according to the criteria of the patient's insurance company.
We believe that by tailoring the treatment to individual preferences and requirements, we can reduce the risk of refusal to participate, enhance engagement, and reduce dropout. Preliminary data from our approach have demonstrated promising results in terms of feasibility and effectiveness (see attached abstract). To summarize, our strategy for managing prediabetes and lowering the likelihood of progression to diabetes focuses on personalization and engagement. We recognize the diversity of treatment outcomes and the importance of evaluating various risk factors while engaging patients in conversations about their individual risks. By incorporating these principles into our intervention, we aim to deliver targeted and efficient healthcare to individuals at risk of developing diabetes.
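A minimal sketch of how such a benefit-based calculator might be structured. The thresholds below (BMI ≥ 35 kg/m², age < 60, prior gestational diabetes) follow commonly cited ADA considerations for metformin in prediabetes, but the function name, scoring rule, and exact cutoffs are illustrative assumptions, not the deployed calculator:

```python
def metformin_consideration(age, bmi, hba1c, history_gdm=False):
    """Coarse, illustrative metformin-consideration flag for prediabetes.

    Thresholds echo commonly cited ADA considerations (BMI >= 35 kg/m2,
    age < 60, prior gestational diabetes); the real calculator's logic
    may differ.
    """
    # Prediabetic HbA1c range (5.7% to 6.4%); outside it, the tool abstains.
    if not (5.7 <= hba1c < 6.5):
        return "not prediabetic range"
    score = 0
    if bmi >= 35:
        score += 1
    if age < 60:
        score += 1
    if history_gdm:
        score += 1
    return "consider metformin" if score >= 1 else "lifestyle first"
```

In practice such a tool would be embedded in the physician's workflow and combined with a shared decision-making conversation, as described above.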
Attached materials:
A. Three arms of intervention: the personalized approach in pre-diabetic patients
B. Preliminary data in 505 patients (12-month follow-up) - abstract
C. Calculator for medical treatment

A. Diabetes prevention model: see attached file.

B. Comparing Personalized Strategies to Reduce Prediabetes-to-Diabetes Conversion in Primary Care Clinics

Background: Randomized prevention trials show 5.8%-18.3% annual progression from prediabetes to diabetes in control arms (1). First-line therapy is lifestyle modification (weight loss and regular exercise) or metformin, with lifestyle change yielding the greater risk reduction. Translating evidence-based diabetes prevention strategies into routine practice necessitates adaptable, patient-centered care models. In January 2022, the northern district of Maccabi Healthcare Services implemented a prediabetes program across three cities in northern Israel (Nof HaGalil, Shefar'am and Safed). To improve adherence, the program allowed adults diagnosed with prediabetes to select, together with their physicians, one of three management pathways, including personalized consideration of metformin treatment. The three pathways were: 1) physician-led care, where management and monitoring were conducted exclusively by the primary care physician; 2) physician care with in-person dietary counseling and nurse consultations; and 3) a remote management program combining physician care with remote dietary counseling. We aimed to assess the short-term effectiveness of these three pathways in preventing progression to diabetes.

Methods: This retrospective cohort study analyzed anonymized electronic health record data of adults enrolled in the prediabetes program between January 1, 2022, and December 31, 2023. Baseline variables included age, sex, body mass index (BMI), glycated hemoglobin (HbA1c), and metformin initiation status.
The primary outcome was progression to diabetes within 12 months (defined as HbA1c ≥ 6.5% or two fasting glucose measurements ≥ 126 mg/dL). Categorical variables were compared using chi-squared or Fisher's exact tests; continuous variables were compared using one-way ANOVA or Kruskal-Wallis tests.

Results: Of 505 enrolled participants, 430 completed the intervention (85% adherence; mean ± SD age 59.0 ± 10.9 years; 53% women; BMI 31.2 ± 6.4 kg/m²): 130 in the physician-led arm, 212 in the physician + dietician + nurse arm, and 88 in the remote consultation arm. Baseline differences were observed for age (61.5 ± 9.1, 56.8 ± 12.1, 58.2 ± 7.7 years, respectively; p < 0.001), BMI (31.2 ± 7.2, 31.8 ± 5.8, 29.2 ± 5.7 kg/m², respectively; p = 0.01), and metformin initiation rates (72.0%, 10.9%, 15.9%, respectively; p < 0.001). After 12 months, 19 participants (4.4%) progressed to diabetes: 8/130 (6.1%) in the physician-led arm, 5/212 (2.4%) in the physician + dietician + nurse arm, and 6/88 (6.8%) in the remote consultation arm (p = 0.12). The physician + dietician + nurse arm showed the largest, though nonsignificant, relative-risk reductions versus physician-led care (RR 0.38, p = 0.08) and remote care (RR 0.35, p = 0.09); the physician-led and remote pathways were similar (RR 1.11, p = 0.84). All groups had modest, comparable BMI declines.

Conclusions: A personalized intervention strategy over 12 months reduced the progression from prediabetes to diabetes in primary care settings, with patient adherence playing a key role. Allowing patients to choose management pathways can be beneficial in real-world practice. Larger cohorts and longer follow-up are needed to confirm the efficacy of this approach.

Reference:
1. Echouffo-Tcheugui JB, Perreault L, Ji L, Dagogo-Jack S. Diagnosis and management of prediabetes: a review. JAMA. 2023 Apr 11;329(14):1206-16.

C. Two calculators:
1. Metformin indications
2. Obesity treatment availability in HMOs
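As a sanity check, the relative risks reported in the abstract can be reproduced directly from the arm counts; a short illustration (variable names are ours):

```python
def relative_risk(events_a, n_a, events_b, n_b):
    """Risk ratio: incidence in group A divided by incidence in group B."""
    return (events_a / n_a) / (events_b / n_b)

# Progression counts / arm sizes, taken from the abstract
physician_led = (8, 130)
dietician_nurse = (5, 212)
remote = (6, 88)

rr_dn_vs_phys = relative_risk(*dietician_nurse, *physician_led)   # ~0.38
rr_dn_vs_remote = relative_risk(*dietician_nurse, *remote)        # ~0.35
rr_remote_vs_phys = relative_risk(*remote, *physician_led)        # ~1.11
```

Each value matches the RR figures quoted in the Results section; the reported p-values additionally require a significance test on the 2x2 tables, which is omitted here.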
Keshet Joseph
Speech sound disorder is a communication disorder in which children have persistent difficulty saying words or sounds correctly. Speech sound production describes the clear articulation of the phonemes (individual sounds) that make up spoken words. It requires both the phonological knowledge of speech sounds and the ability to coordinate the jaw, tongue, and lips with breathing and vocalizing in order to produce speech sounds. Most children can say almost all speech sounds correctly by the age of 4 years. A child who does not pronounce the sounds as expected may have a speech sound disorder, which may include difficulty with the phonological knowledge of speech sounds or with coordinating the movements necessary for speech. These communication difficulties can result in a limited ability to participate effectively in social, academic, or occupational environments. Overall, 2.3% to 24.6% of school-aged children are estimated to have speech delay or speech sound disorders (Black, Vahratian, & Hoffman, 2015; Law, Boyle, Harris, Harkness, & Nye, 2000; Shriberg, Tomblin, & McSweeny, 1999; Wren, Miller, Peters, Emond, & Roulstone, 2016). Children with speech sound disorder are referred to speech therapy, which usually takes around 15 weeks. At first, the clinician works with the child on an auditory diagnosis of the distorted sounds at different levels (a sound, a syllable, an expression, and a single word). Next, the work focuses on learning the motor skills of sound production and on the placement of the articulator organs during production, using visual feedback in addition to auditory feedback. Many research papers show that the most critical part of the treatment is the feedback given to the patient, which helps her or him develop a correct model of pronunciation.
Schiff Rachel
We utilize AI techniques (deep learning, computer vision) to assess motor visuospatial gestalt and memory abilities. Using a shape reproduction task and analyzing the gap between the original and reproduced shape, such tools enable a robust, reliable and accurate assessment. Moreover, modern tablet technology enables the algorithm to reach a level of accuracy unmet by human diagnosticians.
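The production system uses deep-learning and computer-vision models; as a simple illustration of the underlying idea, the "gap" between an original and a reproduced shape can be quantified with a symmetric mean nearest-neighbour (Chamfer-style) distance over sampled contour points. This is a stand-in for the learned score, not the actual algorithm:

```python
def chamfer_distance(points_a, points_b):
    """Symmetric mean nearest-neighbour distance between two 2D point sets,
    a simple stand-in for a learned shape-similarity score."""
    def one_way(src, dst):
        total = 0.0
        for (x1, y1) in src:
            # Distance from each source point to its closest target point
            total += min((x1 - x2) ** 2 + (y1 - y2) ** 2
                         for (x2, y2) in dst) ** 0.5
        return total / len(src)
    return 0.5 * (one_way(points_a, points_b) + one_way(points_b, points_a))

original = [(0, 0), (1, 0), (1, 1), (0, 1)]               # target square
reproduction = [(0.1, 0), (1, 0.1), (0.9, 1), (0, 0.9)]   # child's copy
gap = chamfer_distance(original, reproduction)            # small but nonzero
```

A larger gap indicates a less faithful reproduction; the actual tool analyzes far richer features (stroke dynamics, gestalt structure) than raw point distances.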
Meital Gal Tanamy
Vaccines based on live attenuated viruses are the most effective strategy for controlling infections, since they elicit a long-lasting, natural, and effective immune response, but they entail challenges such as safety and residual virulence. Hepatitis C Virus (HCV) is a major global health problem, causing liver disease and liver cancer, with millions infected each year and hundreds of thousands of annual fatalities; yet no vaccine is currently available for the virus. Here we present a novel computational approach for the accurate prediction of virus attenuation. The approach is based on the rational design of weakened virus variants through the insertion of a high number of synonymous mutations that disrupt the viral RNA's secondary structure and regulatory sequences important for the viral life cycle. By measuring RNA levels and virus spread in an HCV infection model, we showed that these variants have lower viral fitness relative to the wild-type virus, with a gradient of attenuation in concordance with the prediction model. Deep sequencing of replicating viruses demonstrated the genomic stability of the attenuated variants. Differential expression analysis and evaluation of cancer-related phenotypes revealed that the variants have a lower pathogenic influence on host cells compared to the WT virus. These rationally designed variants may be further considered as a promising direction for a viable HCV vaccine. Importantly, the computational approach described here is based on the most fundamental viral regulatory motifs and therefore may be applied to almost any virus as a new strategy for vaccine development.
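To illustrate the core idea of synonymous (silent) recoding, the sketch below swaps codons for synonyms that encode the same amino acid, altering the RNA sequence while preserving the protein. The codon table is a small illustrative subset of the standard genetic code, and the sketch does not model the structure-disruption objective that the actual prediction algorithm optimizes:

```python
# Illustrative subset of standard-code synonymous codons
SYNONYMS = {
    "CTG": ["CTA", "CTC", "CTT", "TTA", "TTG"],  # Leucine
    "GCC": ["GCA", "GCG", "GCT"],                # Alanine
    "CGG": ["CGA", "CGC", "CGT", "AGA", "AGG"],  # Arginine
}

def synonymize(seq):
    """Replace each codon with a synonymous alternative where one is known,
    preserving the encoded protein while changing the nucleotide sequence."""
    out = []
    for i in range(0, len(seq) - len(seq) % 3, 3):
        codon = seq[i:i + 3]
        out.append(SYNONYMS.get(codon, [codon])[0])
    return "".join(out)

mutated = synonymize("CTGGCCCGG")  # Leu-Ala-Arg preserved, sequence changed
```

The actual method chooses which synonymous substitutions to introduce so as to maximally disrupt RNA secondary structure and regulatory motifs, which is where the attenuation prediction comes from.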
Sharon Gannot
In this work, we propose using diffusion models for speech inpainting, i.e., restoring missing or severely corrupted speech segments obfuscated by severe noise. We leverage the ability of diffusion models to generate realistic speech conditioned on the available context. Our approach progressively refines the reconstructed speech by modeling the missing segments as a denoising process, ensuring smooth transitions and high-fidelity synthesis. As we emphasize semantically correct speech generation, we use automatic speech recognition with a language model (ASR+LM) to guide the speech generation process. Our findings demonstrate the proposed model's ability to handle a wide range of scenarios, from short gaps to longer missing segments, making it suitable for reconstructing a speech signal corrupted by severe noise, e.g., explosive noise. Our solution has several distinct attributes: 1) it is independent of the speaker, i.e., the algorithm is not limited to specific known speakers; 2) it preserves the speaker's natural voice style and prosody; and 3) it maintains the natural environment, e.g., reverberation level, while eliminating the strong noise.
Singer Gonen
We developed Automatic Complementary Separation Pruning (ACSP), a novel and fully automated pruning method for convolutional neural networks. ACSP integrates the strengths of both structured pruning and activation-based pruning, enabling the efficient removal of entire components such as neurons and channels while leveraging activations to preserve the most relevant components. Our approach is designed specifically for supervised learning tasks, where we construct a graphic space that encodes the separation capabilities of each component with respect to all class pairs. By employing complementary selection principles and utilizing a clustering algorithm, ACSP ensures that the selected components maintain diverse and complementary separation capabilities, reducing redundancy and maintaining high network performance. The method automatically identifies the optimal subset of components for each layer, selecting the minimal subset that preserves performance. This methodology is applicable to any type of network, including large language models.
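The following toy sketch illustrates the complementary-selection idea: each channel is scored by the class pairs it separates, and channels are chosen so that their separation capabilities complement one another. The greedy covering rule below is our simplification of ACSP's clustering-based selection, and all names are illustrative:

```python
from itertools import combinations

def select_channels(separation, n_keep):
    """Greedy complementary selection: prefer channels that separate
    class pairs not yet covered by the channels already selected
    (a simplification of ACSP's clustering-based procedure)."""
    selected, covered = [], set()
    candidates = set(separation)
    while candidates and len(selected) < n_keep:
        best = max(candidates,
                   key=lambda c: (len(separation[c] - covered),
                                  len(separation[c])))
        selected.append(best)
        covered |= separation[best]
        candidates.remove(best)
    return selected, covered

# Toy example: 3 classes -> class pairs (0,1), (0,2), (1,2)
pairs = set(combinations(range(3), 2))
separation = {
    "ch0": {(0, 1)},           # redundant with ch1
    "ch1": {(0, 1), (0, 2)},
    "ch2": {(1, 2)},           # complements ch1
}
kept, covered = select_channels(separation, n_keep=2)
```

Here two channels suffice to cover every class pair, so the redundant channel can be pruned without losing separation capability, which mirrors the redundancy-reduction goal described above.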
Sharon Gannot
In this work, we propose a method to imitate the human ability to selectively attend to a single speaker, even in the presence of multiple simultaneous talkers. To achieve this, we propose a novel approach for binaural target speaker extraction that leverages the listener’s Head-Related Transfer Function (HRTF) to isolate the desired speaker. Notably, our method does not rely on speaker embeddings, making it speaker-independent and enabling strong generalization across multiple speech datasets in different languages. We employ a fully complex-valued neural network that operates directly on the complex-valued Short-Time Fourier transform (STFT) of the mixed audio signals. This approach deviates from conventional methods that utilize spectrograms or treat the real and imaginary components of the STFT as separate real-valued inputs. The method is first evaluated in an anechoic, noise-free scenario, where it demonstrates excellent extraction performance while effectively preserving the binaural cues of the target signal. Next, it is tested under mild reverberation conditions. The method remains robust to reverberant conditions, maintaining speech clarity, preserving source directionality, and simultaneously reducing reverberation. Demo-page: https://bi-ctse-hrtf.github.io
Ayal Hendel
We have developed a machine-learning-based approach for analysing treatment-vs-control multiplexed-PCR and next-generation sequencing data to infer and quantify CRISPR genome-editing off-target activity. Our methods are now well developed, and a full manuscript is in the final stages of preparation. The main contributions of this project are the tool and the statistical modelling approach behind it, which lead to improved performance, as we demonstrate. Moreover, we observed that this type of data also allows us to infer translocation/fusion adverse events, which are not addressed by any of the existing approaches (e.g., CRISPResso 1 and 2, ampliCan). We apply our inference methods to experimental data (5 different on-target loci, in different technical configurations, with a total of 230 off-target sites examined) and demonstrate unique findings that also shed light on translocation mechanisms related to CRISPR genome editing.
Klein Shmuel Tomi
Huffman coding is one of the best known compression methods. Its static variant encodes each occurrence of a given character in the same way throughout the process, which is known to produce an optimal encoding. A dynamic variant of Huffman coding encodes a given character on the basis of its frequency in the portion of the input file processed so far. Dynamic Huffman coding may be better than static Huffman coding, but it may also be worse. The new method proposed in this invention is provably always better than the static variant of Huffman coding. This is achieved by reversing the direction of the references from the encoded elements to the model of the encoding: instead of pointing backwards at the portion already processed, the encoder looks ahead into the portion still to come.
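A toy illustration of the "looking into the future" idea (not the patented algorithm): symbol i is encoded with a Huffman code built from the frequencies of the not-yet-encoded suffix, assuming the decoder is given the full frequency counts up front, whereas the static variant uses one code for the whole file:

```python
import heapq
from collections import Counter

def huffman_lengths(freq):
    """Codeword length per symbol for a Huffman code over `freq`."""
    if len(freq) == 1:
        return {next(iter(freq)): 1}
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(sorted(freq.items()))]
    heapq.heapify(heap)
    tick = len(heap)  # unique tie-breaker so dicts are never compared
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)
        f2, _, d2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, tick,
                              {s: l + 1 for s, l in {**d1, **d2}.items()}))
        tick += 1
    return heap[0][2]

def static_huffman_bits(text):
    """Bits used when one code, built from the full file, encodes everything."""
    lengths = huffman_lengths(Counter(text))
    return sum(lengths[ch] for ch in text)

def forward_huffman_bits(text):
    """Bits used when symbol i is coded with a Huffman code built from the
    frequencies of the remaining suffix text[i:] (looking into the future)."""
    remaining = Counter(text)
    bits = 0
    for ch in text:
        bits += huffman_lengths(remaining)[ch]
        remaining[ch] -= 1
        if remaining[ch] == 0:
            del remaining[ch]
    return bits
```

On the string "aaabbc", for example, the forward-looking scheme spends 8 bits against 9 for static Huffman: codewords shorten as symbols are exhausted from the remaining suffix.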
Cohen Eliahu
A system and apparatus for high-resolution, high-contrast, low-dose imaging using high-photon-energy radiation in the hard X-ray and gamma-ray regimes. The system comprises:
A) a high-photon-energy source configured to provide an input beam;
B) a diffuser configured to induce intensity fluctuations stronger than those originating from the source, where i) the diffuser is predesigned and fabricated according to a computer-generated topographic map, and ii) the diffuser is characterized by a high-resolution imaging system;
C) motorized stages to scan the diffuser;
D) a detector, which can be either a low-resolution detector or a single-pixel detector; and
E) a processor configured to receive output intensity measurements, to correlate them with the intensity fluctuations at the position of the object (calculated from knowledge of the diffuser details), and to use the correlated data to reconstruct an image of the object.
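The correlation-based reconstruction in (E) follows the logic of computational ghost imaging. The sketch below is a simplified simulation of that principle, not the apparatus's actual processing chain: single-pixel "bucket" measurements are correlated, pixel by pixel, with the known diffuser intensity patterns to recover the object:

```python
import random

def reconstruct(patterns, buckets):
    """Computational-ghost-imaging style reconstruction: for each pixel,
    accumulate pattern intensity times the mean-removed bucket signal."""
    n = len(patterns)
    h, w = len(patterns[0]), len(patterns[0][0])
    mean_b = sum(buckets) / n
    image = [[0.0] * w for _ in range(h)]
    for p, b in zip(patterns, buckets):
        for y in range(h):
            for x in range(w):
                image[y][x] += p[y][x] * (b - mean_b) / n
    return image

# Simulated acquisition: the object is a single bright pixel at (0, 0);
# each bucket value is the total intensity transmitted through the object.
random.seed(0)
obj = [[1, 0], [0, 0]]
patterns = [[[random.random() for _ in range(2)] for _ in range(2)]
            for _ in range(500)]
buckets = [sum(p[y][x] * obj[y][x] for y in range(2) for x in range(2))
           for p in patterns]
img = reconstruct(patterns, buckets)  # brightest pixel recovers (0, 0)
```

Only a single-pixel detector and knowledge of the patterns are needed, which is what allows the low-dose, low-resolution-detector operation described in (D) and (E).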
Zalevsky Zeev
Time multiplexing is a super resolution technique that sacrifices time to overcome the resolution reduction caused by diffraction. There are many super resolution methods based on time multiplexing, but all of them require a priori knowledge of the time-changing encoding mask, which is projected on the object and used to encode and decode the high-resolution information. Here we present a time multiplexing technique that does not require a priori knowledge of the projected encoding mask. First, the theoretical concept of the technique is demonstrated; then, numerical simulations and experimental results are presented.
Louzoun Yoram
We have developed a novel machine-learning method for microbiome-based classification. The method translates samples into images and applies a convolutional neural network (CNN) to these images.
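A minimal sketch of one plausible sample-to-image translation (the actual mapping used by the method is not specified here): each sample's taxon abundances are laid out on a fixed 2D grid, so that the same pixel always corresponds to the same taxon across samples, producing a one-channel image a CNN can consume:

```python
import math

def sample_to_image(abundances, taxa_order):
    """Arrange a microbiome sample's abundances into a square, zero-padded
    2D grid. `taxa_order` is a fixed ordering shared by all samples so
    that each pixel consistently represents one taxon."""
    values = [abundances.get(t, 0.0) for t in taxa_order]
    side = math.ceil(math.sqrt(len(values)))
    values += [0.0] * (side * side - len(values))  # pad to a full square
    return [values[i * side:(i + 1) * side] for i in range(side)]

# Hypothetical taxa list and a sparse sample
taxa = ["Bacteroides", "Prevotella", "Faecalibacterium",
        "Roseburia", "Akkermansia"]
image = sample_to_image({"Bacteroides": 0.4, "Akkermansia": 0.1}, taxa)
# 3x3 grid: Bacteroides at (0, 0), Akkermansia at (1, 1), zeros elsewhere
```

A biologically meaningful ordering of the taxa (e.g., by taxonomy) would place related taxa in nearby pixels, giving the CNN's local filters structure to exploit.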