OSIG Oldenburg Newsletter

Moin dear Open Science friends of Oldenburg (and beyond),

Please feel free to share this newsletter and, as always, to join our biweekly online meetings, which are open to everyone (Tuesdays, 10:30–11:30 a.m. on odd calendar weeks, in the study group and room A7-0-36; next meeting on 10 March).

Previous newsletter issues can be retrieved from the OSIG website.

  • Welcome to our new member: A warm welcome to Cris, a new postdoc in the Biological Psychology lab, funded under the Hearing4All.connect project. His research explores how the brain processes uncertainty, specifically how hearing loss makes sensory information less reliable. He aims to test how this uncertainty affects the way people value and choose to engage in social interactions. We look forward to discussing open science and potential collaboration projects with him in OSIG.
  • Multiverse analysis talk at Charité Berlin: Cassie recently gave a talk at Charité Berlin titled 'Exploring the Multiverse: Transparency, Uncertainty, and Robustness in Data Analysis', presented as part of the QUEST Seminar on Responsible Research. By systematically evaluating all defensible analysis pipelines, multiverse analysis reports the robustness of results, making analytical uncertainty transparent. Catch up on the materials here.
  • Open science 101: Cassie and Micha led an Introduction to Open Science session during the department colloquium. They covered core components such as open data, open code, open materials, open access, and preregistration. Materials will be shared soon via a new OSIG resources repository.
  • GRN ReproducibiliTea journal club: Sumbul led our very first GRN ReproducibiliTea journal club in December 2025. Originating at the University of Oxford in 2018, ReproducibiliTea is a grassroots initiative that has spread to over 100 institutions, creating informal spaces for discussing research transparency over tea. We kicked things off by discussing “FAIRly Big: Reproducible Large-Scale Data Processing” (Wagner et al., 2022, Scientific Data), focusing on reproducible processing for large datasets using DataLad and containers. The session prompted a great discussion on privacy constraints, infrastructure limits, and real-world computational reproducibility—exactly the open, curious, and constructive environment we hoped to build. Students, doctoral researchers, and staff are warmly invited to join future sessions. Keep an eye on the mailing list for dates, and feel free to suggest papers or host a session. See you at the next tea (schedule coming soon)!
  • PracticalMEEG workshop: Karel ran a workshop at PracticalMEEG 2025 covering MEGqc, a new open-source pipeline designed for fast, standardized, and fully reproducible quality assessment. The materials are available here and here.
  • Karel with his poster at the PracticalMEEG workshop
  • Practical project open science award: For the first time, OSIG is awarding a Practical Project Open Science Prize. We want to recognize practical projects in the Neurocognitive Psychology programme that strongly engage with open science principles. Generously sponsored by Prof. Andrea Hildebrandt, the €50 prize will go to one outstanding student project. Practical project students can apply by Monday, March 16, 2026, using this form.
  • Clarifying open scholarship with Re-SearchTerms: Anna Yi Leung (LMU Munich) and Daniel Kristanto are co-leading Re-SearchTerms, a FORRT-supported project building digital infrastructure to improve conceptual clarity in our field. Following their recently accepted paper in Meta-Psychology, the project tackles the inconsistent definitions of common open scholarship terms. They built an interactive Shiny web application that lets users compare definitions from various sources to see how terms shift across contexts. The goal is to help researchers and educators choose their terminology more transparently. Re-SearchTerms is part of FORWARD, a UNESCO-endorsed initiative for the International Decade of Sciences for Sustainable Development.
  • The state of open data: Digital Science, Figshare, and Springer Nature released their latest review on open data sharing. The report tracks the ongoing push toward FAIR data, noting that researchers are increasingly standardizing data processing and metadata to boost accessibility and reproducibility. Read the details here.
  • Department open science tool publications: Our department recently published several papers on open science tools.
    • COMET Toolbox: As Cassie mentioned in her talk, multiverse analysis is crucial for checking the robustness of results, especially in fMRI dynamic functional connectivity where analysis choices vary widely. To facilitate this, Micha and Carsten developed the COMET toolbox—a comprehensive Python suite for dynamic functional connectivity methods. Read the paper here.
    • Multiverse Sampling: Running a full multiverse analysis can generate millions of pipelines. When that isn't feasible to compute exhaustively, sampling pipelines from the multiverse is a practical alternative. Cassie recently published a paper evaluating different sampling approaches.
    • Tackling the Replicability Crisis: Have we solved the replicability crisis? Following an expert panel, Julius led an opinion paper answering this question for the cognitive neurosciences. A major takeaway is the urgent need to restructure academia's incentive system. Curious about their proposed restructuring? Read the preprint here.
  • Love Replication Week: It is almost here! During Love Replication Week, you can learn to conduct reproduction and replication studies, join events, and celebrate replications across all research types. Cassie is giving a talk on 'Multiverse Analysis for Reporting Robustness to Analytical Flexibility' on Monday, March 2; see the full program here.
  • Brain Awareness Week: This global campaign promotes the progress and benefits of brain research. Locally, SCOP (Science Communication of the Department of Psychology) is organizing Oldenburg's Brain Awareness Week for the third time, from March 16–20, with the goal of bringing science to the public, focusing on the brain, neurological diseases, and brain health. Check the program here.
  • Open science series: The open science series will return to the department colloquium next semester with new guest speakers. Stay tuned for the speaker lineup and schedule.
  • fMRI workshop: Have you ever wondered whether there are large, open MRI datasets available for your research? We have good news! Amir, Micha, Karel, and Sumbul are organizing an interactive workshop on open neuroimaging datasets, including how to access them, what kinds of assessments and measures they include, and how to get started with basic analyses. The workshop will also feature a hands-on session where participants can explore and analyze data themselves. The event is being planned, so stay tuned for more details!
  • Open access barcamp 2026: Oldenburg is hosting! The Open Access Barcamp 2026 will be held at the BIS Library and Information System on April 29 and 30 as a great space for the Open Access community to exchange ideas, network, and learn. Participants shape the varied program themselves through parallel content sessions and networking items; participation is free, so register here.
  • For this issue’s section on AI in Open Science, we’re looking at a very practical challenge: how to best use AI for research paper writing. As large language models become more integrated into our daily workflows, a major problem has emerged: AI hallucinations and fabricated citations in academic writing. How can we benefit from AI assistance while ensuring our work remains trustworthy and reproducible?
  • This is where a newly published paper comes in. To tackle these pitfalls, its authors built a web-based tool that uses a "human-in-the-loop" approach. Instead of letting the model generate text freely, researchers must provide their own citations and outlines before the AI generates any text. The AI is then strictly restricted to using only the provided literature.
  • What makes this especially relevant for our community is that the tool automatically generates a transparency report. This report documents exactly how the text was constructed, making the AI's role fully transparent and the drafting process reproducible. It is a great example of how we can build practical guardrails into AI tools to align them with open science standards!
  • A quick note on process: In the spirit of transparency (and integrating AI tools!), this newsletter was polished for readability with the assistance of Google's Gemini 3 Pro model. All content and final edits were, of course, reviewed and approved by Daniel before publishing.
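A closing aside on the multiverse work mentioned above: when every defensible analysis choice is combined, the number of pipelines grows multiplicatively, and sampling becomes the practical fallback described in Cassie's paper. Here is a minimal Python sketch of that idea; the choice names and values are entirely hypothetical and are not taken from the COMET toolbox or any of the papers above.

```python
import itertools
import random

# Hypothetical analysis choices; real multiverses (e.g. in dynamic
# functional connectivity analyses) can have far more decision points.
choices = {
    "detrend": [True, False],
    "filter_hz": [0.01, 0.05, 0.1],
    "window_s": [30, 60, 120],
    "metric": ["correlation", "coherence", "mutual_info"],
}

# The full multiverse: every combination of defensible choices
# (here 2 * 3 * 3 * 3 = 54 pipelines).
full_multiverse = [
    dict(zip(choices, combo))
    for combo in itertools.product(*choices.values())
]
print(f"{len(full_multiverse)} pipelines in the full multiverse")

# When the full grid is too large to compute exhaustively, evaluate a
# random sample of pipelines instead and report robustness across it.
random.seed(0)
sample = random.sample(full_multiverse, k=10)
for pipeline in sample[:3]:
    print(pipeline)
```

The same pattern scales to real analyses by replacing the `print` calls with an evaluation of each sampled pipeline and summarizing how stable the result is across the sample.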