Commit 7661a9e

Development environment setup (#2)
* Add AGENTS.md with Cursor Cloud development instructions

* Fill portfolio pages with research content from publications
  - FDN: Complete page with research directions, tools (FDNTB, FLAMO, diff-colorless-FDN, DVN reverb), demos, and 19 publications
  - Acoustic Illusions: Replace placeholder with content on transfer-plausibility, 6DoF binaural, latency, head-worn transparency, with tools and 13 publications
  - Measurement: Replace placeholder with swept-sine techniques, noise removal, DecayFitNet, common-slope analysis, with 5 tools and 10 publications
  - Similarity: Replace placeholder with similarity metrics, optimal transport, source signal similarity, perceptual roughness, with tools and 10 publications
  - NEW Spatial Audio & Room Transitions: 6DoF rendering, room transitions, common-slope model, velvet noise, with 6 tools and 12 publications
  - NEW Differentiable Audio & Deep Learning: DDSP, diff-FDN, RIR completion, DecayFitNet, FLAMO, KLANN, with 6 tools and 11 publications
  - Homepage: increase portfolio count from 5 to 7

* Add featured images for all portfolio pages
  - RES: Signal flow diagram with time-varying feedback matrix (from project page)
  - FDN: Feedback delay network block diagram with echo density (from FDNTB project)
  - Acoustic Illusions: Inside the Quartet VR musician photo (from project page)
  - Spatial Audio: ARTSRAM robot twin measurement system (from ARTSRAM paper)
  - DDSP: Differentiable FDN block diagram (from RIR2FDN paper)
  - Measurement: Swept-sine spectrogram with noise events (from JASA paper)
  - Similarity: Generated comparative spectrogram visualization

* Restructure portfolio pages: per-topic sections with inline citations
  - Merge 'Key Research Directions' and 'Tools/Demos' into unified per-topic sections where each research area includes its links, code, and demos inline
  - Improve bibliography with BibTeX-style inline citations [[N]](#refN) that link to numbered references at the bottom
  - Rename 'Publications' section to 'References' for consistency
  - Each topic section now serves as a self-contained gateway with concept, references, code links, and demo links all together

* Fix featured images: convert JPEG to PNG for Hugo compatibility
  - SpatialAudio and FDN featured images had JPEG encoding issues that Hugo's image processor couldn't handle; converted both to PNG format

* Update layout, images, and logo
  - Revert RES featured image to original (unique image from repo)
  - Remove default blue robot images from AcousticIllusion, Measurement, Similarity
  - Replace Measurement image with Roomba twin measurement setup photo
  - Replace Similarity image with Arni room layout photo
  - Replace hero image with new Artificial Audio logo (welcome.png)
  - Replace favicon/icon with logo
  - Change portfolio and course layout from 'showcase' to 'card' tile grid
  - Add CSS for responsive card grid with hover effects

* Fix tile grid layout with masonry view and CSS grid
  - Switch from card to masonry view for both portfolio and courses
  - Add CSS grid layout targeting .col-12:has(> .card) for 3-column tiles
  - Add card styling with border, border-radius, and hover effects
  - Hide verbose body content in card summaries (only show tagline)
  - Remove stale .jpg default images that were blocking custom .png files

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: Sebastian Jiro Schlecht <SebastianJiroSchlecht@users.noreply.github.com>
1 parent bfb9b1c commit 7661a9e

21 files changed

Lines changed: 606 additions & 45 deletions


AGENTS.md

Lines changed: 20 additions & 0 deletions
# AGENTS.md

## Cursor Cloud specific instructions

This is a Hugo static site (v0.135.0 extended) using the Hugo Blox Builder (Wowchemy) "Research Group" theme. There is no backend, database, or runtime server — it is a purely static site generator.

### Key commands

| Action | Command |
|---|---|
| Dev server | `hugo server` (port 1313) |
| Production build | `hugo --gc --minify` |
| Fetch modules | Automatic on `hugo server` / `hugo` build |

### Caveats

- **Do not run `hugo mod get`** — it upgrades module versions and breaks the build due to incompatibilities with the pinned `blox-plugin-decap-cms` version. Hugo auto-fetches the correct module versions from `go.mod`/`go.sum` on build.
- Hugo requires Go (pre-installed) for module resolution. Hugo itself must be installed separately (v0.135.0 extended; see `netlify.toml` for the canonical version).
- This codebase has no linter, test suite, or package manager lockfile. The only dependency file is `go.mod`. Validation is done via `hugo --gc --minify` (build) and `hugo server` (dev).
- Content is in `content/` as Markdown/YAML. Config is in `config/_default/`. Edit content files and the dev server live-reloads automatically.

assets/media/icon.png

147 KB

assets/media/welcome.png

1.2 MB

assets/scss/template.scss

Lines changed: 95 additions & 0 deletions
```scss
// (context, existing lines 7–9)
.cta-group {
  justify-content: center;
}

// Card tile grid for portfolio and courses
.col-12 > .card {
  display: inline-block;
  vertical-align: top;
}

// Use the parent col-12 that directly contains .card children as a grid container
.col-12:has(> .card) {
  display: grid !important;
  grid-template-columns: repeat(auto-fill, minmax(300px, 1fr));
  gap: 1.2rem;
}

.col-12 > .card {
  border: 1px solid rgba(0, 0, 0, 0.08);
  border-radius: 8px;
  overflow: hidden;
  transition: box-shadow 0.2s, transform 0.2s;
}

.col-12 > .card:hover {
  box-shadow: 0 4px 20px rgba(0, 0, 0, 0.12);
  transform: translateY(-2px);
}

.col-12 > .card .card-image img {
  aspect-ratio: 16 / 10;
  object-fit: cover;
  width: 100%;
}

.col-12 > .card .card-text {
  padding: 0.6rem 0.8rem;
}

.col-12 > .card .card-text h4 {
  font-size: 0.95rem;
  margin-bottom: 0.3rem;
}

.col-12 > .card .card-text .article-style {
  font-size: 0.82rem;
  line-height: 1.4;
}

.col-12 > .card .card-text .article-style p {
  margin-bottom: 0.3rem;
}

// Hide the long body content in the card summary - only show the tagline
.col-12 > .card .card-text .article-style h1,
.col-12 > .card .card-text .article-style h2,
.col-12 > .card .card-text .article-style h3,
.col-12 > .card .card-text .article-style hr {
  display: none;
}

.col-12 > .card .card-text .article-style p ~ p {
  display: none;
}

// Same grid for card-simple view
.col-12:has(> .card-simple) {
  display: grid !important;
  grid-template-columns: repeat(auto-fill, minmax(300px, 1fr));
  gap: 1.2rem;
}

.col-12 > .card-simple {
  border: 1px solid rgba(0, 0, 0, 0.08);
  border-radius: 8px;
  overflow: hidden;
  transition: box-shadow 0.2s, transform 0.2s;
}

.col-12 > .card-simple:hover {
  box-shadow: 0 4px 20px rgba(0, 0, 0, 0.12);
  transform: translateY(-2px);
}

.col-12 > .card-simple .img-hover-zoom {
  aspect-ratio: 16 / 10;
  overflow: hidden;
}

.col-12 > .card-simple .article-banner {
  width: 100%;
  height: 100%;
  object-fit: cover;
}

.col-12 > .card-simple .article-metadata {
  display: none;
}
```

content/_index.md

Lines changed: 6 additions & 6 deletions
```diff
@@ -10,7 +10,7 @@ sections:
     title: |
       Artificial Audio
     image:
-      filename: welcome.jpg
+      filename: welcome.png
     text: |
       <br>
@@ -21,7 +21,7 @@ sections:
     title: Portfolio
     subtitle:
     text:
-    count: 5
+    count: 7
     filters:
       author: ''
       category: ''
@@ -32,8 +32,8 @@ sections:
       order: desc
     page_type: portfolio
     design:
-      view: showcase
-      columns: '2'
+      view: masonry
+      columns: '1'
 
   - block: collection
     content:
@@ -54,8 +54,8 @@ sections:
     sort_by : "Title"
     page_type: courses
     design:
-      view: showcase
-      columns: '2'
+      view: masonry
+      columns: '1'
 
   - block: markdown
     content:
```
Binary file (-4.88 MB): not shown

Image file (4.65 MB)

content/portfolio/AcousticIllusion/index.md

Lines changed: 77 additions & 5 deletions
```diff
@@ -3,12 +3,84 @@ title: Acoustic Illusions for Extended Realities
 date: 2025-09-06
 ---
 
-Blend real and virtual sounds seemlessly by creating binaural illusions of acoustic sources.
+Blend real and virtual sounds seamlessly by creating binaural illusions of acoustic sources.
 
-<!--more-->
+---
+
+# Concept
+
+Augmented and mixed reality systems overlay virtual sound onto the physical world. For the illusion to hold, virtual sources must be acoustically indistinguishable from real ones — a challenge that demands accurate binaural rendering, precise spatial reproduction, and an understanding of human perception.
+
+Our research investigates when and why listeners accept virtual sounds as real, developing evaluation paradigms and rendering techniques that push the boundaries of auditory plausibility.
+
+---
+
+# Transfer-Plausibility
+
+We developed the concept of *transfer-plausibility*[[7]](#ref7)[[13]](#ref13): a rigorous framework for evaluating whether virtual sources are accepted as real when both real and virtual sounds coexist. This goes beyond traditional authenticity testing and captures the perceptual demands unique to AR/MR scenarios. Our 3AFC transfer-plausibility test proved more sensitive than alternative evaluation methods, establishing it as a standard for AR audio research.
+
+---
 
-Lorem ipsum dolor sit amet, consectetur adipiscing elit. Integer tempus augue non tempor egestas. Proin nisl nunc, dignissim in accumsan dapibus, auctor ullamcorper neque. Quisque at elit felis. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia curae; Aenean eget elementum odio. Cras interdum eget risus sit amet aliquet. In volutpat, nisl ut fringilla dignissim, arcu nisl suscipit ante, at accumsan sapien nisl eu eros.
+# Binaural Rendering for 6 Degrees of Freedom
+
+Rendering spatial audio for listeners who can freely move and rotate in a space requires processing recorded Ambisonics sound fields with distance and position information[[1]](#ref1)[[2]](#ref2). Our work addresses source distance modeling and listener navigation through measured sound fields, enabling experiences like [**Inside the Quartet**](https://www.sebastianjiroschlecht.com/project/insidethequartet/) — an immersive experience placing the listener inside a string quartet[[10]](#ref10).
+
+Code: [**SPARTA 6DoFconv**](https://leomccormack.github.io/sparta-site/docs/plugins/sparta-suite/#6dofconv) — plugin for six-degrees-of-freedom convolution with spatial room impulse responses.
+
+Code: [**SRIR Interpolation Toolkit**](https://github.com/thomas-mckenzie/srir_interpolation) — perceptually informed interpolation of spatial room impulse responses between measurement positions.
+
+---
+
+# Latency & Perceptual Thresholds
+
+Low-latency processing is critical for maintaining the auditory illusion in real-time AR. We characterized the latency limits of head-tracked binaural rendering systems and their impact on plausibility[[9]](#ref9), providing practical guidelines for system design.
+
+Code: [**Latency Analyzer**](https://github.com/ahihi/latency-analyzer) — tools for measuring binaural rendering latency.
+
+---
 
-Sed eu dui nec ligula bibendum dapibus. Nullam imperdiet auctor tortor, vel cursus mauris malesuada non. Quisque ultrices euismod dapibus. Aenean sed gravida risus. Sed nisi tortor, vulputate nec quam non, placerat porta nisl. Nunc varius lobortis urna, condimentum facilisis ipsum molestie eu. Ut molestie eleifend ligula sed dignissim. Duis ut tellus turpis. Praesent tincidunt, nunc sed congue malesuada, mauris enim maximus massa, eget interdum turpis urna et ante. Morbi sem nisl, cursus quis mollis et, interdum luctus augue. Aliquam laoreet, leo et accumsan tincidunt, libero neque aliquet lectus, a ultricies lorem mi a orci.
+# Head-Worn Device Transparency
 
-Mauris dapibus sem vel magna convallis laoreet. Donec in venenatis urna, vitae sodales odio. Praesent tortor diam, varius non luctus nec, bibendum vel est. Quisque id sem enim. Maecenas at est leo. Vestibulum tristique pellentesque ex, blandit placerat nunc eleifend sit amet. Fusce eget lectus bibendum, accumsan mi quis, luctus sem. Etiam vitae nulla scelerisque, eleifend odio in, euismod quam. Etiam porta ullamcorper massa, vitae gravida turpis euismod quis. Mauris sodales sem ac ultrices viverra. In placerat ultrices sapien. Suspendisse eu arcu hendrerit, luctus tortor cursus, maximus dolor. Proin et velit et quam gravida dapibus. Donec blandit justo ut consequat tristique.
+Wearing headphones or AR glasses disrupts the perception of real sounds. We developed methods for predicting perceptual transparency of head-worn devices[[8]](#ref8), informing the design of passthrough processing that preserves natural listening.
+
+---
+
+# Audiovisual Congruence
+
+How do visual cues interact with spatial audio? We studied whether loudspeaker models or human avatars in VR affect localization performance[[11]](#ref11), revealing the interplay between visual representation and spatial hearing accuracy.
+
+---
+
+# Room Acoustic Memory
+
+Can listeners remember and compare the acoustic character of spaces? Our experiments[[7]](#ref7) investigate how accurately listeners retain room acoustic impressions, informing how quickly AR systems must adapt when transitioning between environments.
+
+---
+
+# Experiences
+
+- [**Inside the Quartet**](https://www.sebastianjiroschlecht.com/project/insidethequartet/) — immersive spatial audio placing the listener inside a string quartet, demonstrating high-quality binaural rendering for musical performance[[10]](#ref10).
+
+- [**Space Walk**](https://www.sebastianjiroschlecht.com/publication/SpaceWalkSound/) — a navigable virtual planetarium for the Oculus Quest with spatialized music[[4]](#ref4), combining stereophonic and immersive sound spatialization.
+
+---
+
+# References
+
+| | Year | Authors | Article |
+|---|---|---|---|
+|<span id="ref1">[1]</span>| 2018 | A. Plinge, S. J. Schlecht et al. | [Six-degrees-of-freedom binaural audio reproduction of first-order Ambisonics](https://doi.org/10.22032/dbt.39955) |
+|<span id="ref2">[2]</span>| 2019 | O. S. Rummukainen, S. J. Schlecht & E. A. P. Habets | [Perceptual study of near-field binaural audio rendering in 6DoF VR](https://doi.org/10.1109/vr.2019.8798177) |
+|<span id="ref3">[3]</span>| 2020 | N. Meyer-Kahlen, S. J. Schlecht & T. Lokki | [Fade-in control for feedback delay networks](http://research.spa.aalto.fi/publications/papers/dafx20-fadefdn/) |
+|<span id="ref4">[4]</span>| 2021 | A. Mancianti, S. J. Schlecht et al. | [Space Walk — visiting the solar system through an immersive sonic journey in VR](https://doi.org/10.5281/zenodo.5717860) |
+|<span id="ref5">[5]</span>| 2021 | N. Meyer-Kahlen, S. J. Schlecht & T. Lokki | [Perceptual roughness of spatially assigned sparse noise for rendering reverberation](https://doi.org/10.1121/10.0007048) |
+|<span id="ref6">[6]</span>| 2022 | N. Meyer-Kahlen, S. J. Schlecht & T. Lokki | [Clearly audible room acoustical differences may not reveal where you are in a room](https://doi.org/10.1121/10.0013364) |
+|<span id="ref7">[7]</span>| 2022 | N. Meyer-Kahlen, S. J. Schlecht et al. | [Transfer-plausibility of binaural rendering with different real-world references](http://research.spa.aalto.fi/publications/papers/i3da21-motus/) |
+|<span id="ref8">[8]</span>| 2022 | P. Lladó, T. McKenzie, N. Meyer-Kahlen & S. J. Schlecht | [Predicting perceptual transparency of head-worn devices](https://doi.org/10.17743/jaes.2022.0024) |
+|<span id="ref9">[9]</span>| 2023 | N. Meyer-Kahlen, S. J. Schlecht & T. Lokki | [Latency analysis of binaural rendering systems](https://doi.org/10.17743/jaes.2022.0089) |
+|<span id="ref10">[10]</span>| 2023 | N. Meyer-Kahlen et al. | [Inside the Quartet — spatial audio experience](https://www.sebastianjiroschlecht.com/project/insidethequartet/) |
+|<span id="ref11">[11]</span>| 2024 | A. Hofmann, N. Meyer-Kahlen, S. J. Schlecht & T. Lokki | [Audiovisual congruence and localization in VR](https://doi.org/10.17743/jaes.2022.0162) |
+|<span id="ref12">[12]</span>| 2024 | N. Meyer-Kahlen & S. J. Schlecht | [Directional distribution of the pseudo intensity vector in anisotropic late reverberation](https://doi.org/10.1121/10.0024960) |
+|<span id="ref13">[13]</span>| 2024 | N. Meyer-Kahlen, S. J. Schlecht et al. | [Testing auditory illusions in AR: Plausibility, transfer-plausibility, and authenticity](https://doi.org/10.17743/jaes.2022.0178) |
+
+---
```
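As an aside on the 3AFC (three-alternative forced choice) paradigm used for the transfer-plausibility test above: a minimal simulation sketch, not taken from the cited papers, where the detect-or-guess listener model, trial count, and seed are illustrative assumptions. It shows why a fully convincing virtual source scores near the 1/3 chance level.

```python
import random

random.seed(0)

def simulate_3afc(n_trials, p_detect):
    """One listener in a 3AFC task: each trial is either a true detection
    (probability p_detect) or a 1-in-3 guess among the three intervals."""
    correct = 0
    for _ in range(n_trials):
        if random.random() < p_detect or random.random() < 1 / 3:
            correct += 1
    return correct / n_trials

# A fully convincing virtual source (p_detect = 0) sits near chance (1/3);
# audible artifacts push the proportion correct above it.
chance_level = simulate_3afc(10_000, 0.0)
detectable = simulate_3afc(10_000, 0.5)
```

Sensitivity of the paradigm then amounts to how reliably the measured proportion correct separates from that 1/3 baseline.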
Image file (30.5 KB)

content/portfolio/DDSP/index.md

Lines changed: 94 additions & 0 deletions
---
title: Differentiable Audio Processing & Deep Learning
date: 2025-09-04
---

Bridging classical signal processing with modern machine learning for audio.

---

# Concept

Classical audio signal processing offers transparent, interpretable algorithms — but tuning their parameters to match complex acoustic targets remains an open challenge. Deep learning brings powerful optimization, but often at the cost of interpretability and efficiency.

Our research bridges these worlds through differentiable signal processing (DDSP): embedding classical audio structures (filters, delays, feedback networks) into differentiable computation graphs that can be optimized end-to-end with gradient descent. Alongside this, we develop neural network approaches for tasks where traditional methods fall short.

---

# Differentiable Feedback Delay Networks

Making FDN parameters differentiable allows reverberation to be optimized toward target decay, coloration, or perceptual objectives using gradient-based training. We showed that even tiny FDN configurations produce high-quality colorless reverberation when optimized this way[[3]](#ref3)[[10]](#ref10), and developed RIR2FDN[[6]](#ref6) for automatically synthesizing FDN configurations that match measured room impulse responses.

Code: [**diff-fdn-colorless**](https://github.com/gdalsanto/diff-fdn-colorless) — optimize FDN parameters for spectrally flat reverberation via gradient descent.

Demo: [**Colorless FDN examples**](http://research.spa.aalto.fi/publications/papers/dafx23-colorless-fdn/) — audio comparisons.

Code: [**rir2fdn**](https://github.com/gdalsanto/rir2fdn) — analyze measured RIRs and synthesize matching FDN configurations.

Demo: [**RIR2FDN project page**](http://research.spa.aalto.fi/publications/papers/dafx24-rir2fdn/) — listening examples of RIR-to-FDN conversion.

---
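The structure being optimized in this section can be sketched without any autodiff machinery. A minimal NumPy FDN follows, where the delay lengths, the random orthogonal feedback matrix, and the single broadband decay gain are illustrative assumptions rather than values from the cited papers.

```python
import numpy as np

def fdn_impulse_response(delays, feedback, n_samples):
    """Tick a feedback delay network sample by sample: N parallel delay
    lines (circular buffers) coupled through a feedback matrix."""
    lines = [np.zeros(d) for d in delays]
    out = np.zeros(n_samples)
    for t in range(n_samples):
        x = 1.0 if t == 0 else 0.0                          # unit impulse input
        taps = np.array([ln[t % len(ln)] for ln in lines])  # delay-line outputs
        out[t] = taps.sum()                                 # unit output gains
        back = feedback @ taps + x                          # recirculate + inject
        for i, ln in enumerate(lines):
            ln[t % len(ln)] = back[i]       # this slot is read again d samples later
    return out

rng = np.random.default_rng(0)
delays = [149, 211, 263, 293]                        # coprime lengths (samples)
Q = np.linalg.qr(rng.standard_normal((4, 4)))[0]     # orthogonal -> lossless mixing
h = fdn_impulse_response(delays, 0.97 * Q, 4096)     # 0.97: per-recirculation decay
```

Treating the entries of `0.97 * Q` (and the gains) as learnable parameters and backpropagating through this recursion is, in essence, what the differentiable FDN work automates.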
# FLAMO: Differentiable Audio Systems Library

[**FLAMO**](https://github.com/gdalsanto/flamo) (Frequency-sampling Library for Audio-Module Optimization)[[9]](#ref9) is a PyTorch library for building and optimizing differentiable linear time-invariant audio systems. It provides differentiable gains, filters (biquads, state variable filters, graphic EQs), delays, and transforms that can be chained into complex architectures and trained end-to-end.

[Documentation](https://gdalsanto.github.io/flamo) · [PyPI](https://pypi.org/project/flamo/)

---
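FLAMO's actual API is not reproduced here; as a stand-in, a small sketch of the frequency-sampling idea itself: evaluate a filter's magnitude response on a frequency grid and minimize a spectral loss, with a finite-difference gradient replacing the autodiff such a library provides. The one-pole filter, grid size, and learning rate are illustrative assumptions.

```python
import numpy as np

w = np.linspace(0, np.pi, 512)                  # frequency grid (rad/sample)

def flatness_loss(g, a):
    """Deviation of the sampled magnitude of H(z) = g / (1 - a z^-1)
    from unity: a crude 'colorless' (spectrally flat) objective."""
    mag = np.abs(g / (1.0 - a * np.exp(-1j * w)))
    return np.mean((mag - 1.0) ** 2)

# Gradient descent on the gain g, with the gradient estimated by
# central finite differences instead of automatic differentiation.
a, g, lr, eps = 0.5, 3.0, 0.5, 1e-6
loss_start = flatness_loss(g, a)
for _ in range(200):
    grad = (flatness_loss(g + eps, a) - flatness_loss(g - eps, a)) / (2 * eps)
    g -= lr * grad
loss_end = flatness_loss(g, a)
```

The loss is quadratic in `g`, so descent settles on the gain that best flattens the sampled response; a full system would optimize many such parameters jointly.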
# Differentiable Active Acoustics

Reverberation enhancement systems form an electro-acoustic feedback loop whose stability is critical. We treat this loop as a differentiable system and optimize stability and performance via gradient descent[[5]](#ref5), opening new possibilities for automated active acoustics design.

Demo: [**Differentiable active acoustics project page**](http://research.spa.aalto.fi/publications/papers/dafx24-diff-aa/) — demonstrations of stability optimization.

---

# Room Impulse Response Completion

Rendering immersive audio in VR and games requires fast RIR generation. [**DECOR**](https://github.com/linjac/rir-completion/) (Deep Exponential Completion Of Room impulse responses)[[8]](#ref8) predicts late reverberation from only the early 50 ms of a measured response — an encoder-decoder network that synthesizes multi-exponential decay envelopes of filtered noise.

Demo: [**RIR completion project page**](https://linjac.github.io/rir-completion/) — interactive examples.

---
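The multi-exponential envelope model named above can be sketched directly. The following is a stand-in, not DECOR's code: the T60 values and amplitudes are illustrative, and the band-wise filtering of the noise is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)

def synth_late_tail(t60s, amps, dur, fs=48000):
    """Late-reverb stand-in: a sum of exponentially decaying white-noise
    terms, one per (t60, amplitude) pair."""
    t = np.arange(int(dur * fs)) / fs
    tail = np.zeros_like(t)
    for t60, a in zip(t60s, amps):
        env = a * 10.0 ** (-3.0 * t / t60)      # -60 dB after t60 seconds
        tail += env * rng.standard_normal(t.size)
    return tail

# Two decay slopes, e.g. a fast early slope plus a slower residual one.
tail = synth_late_tail(t60s=[0.3, 1.2], amps=[1.0, 0.2], dur=1.0)
```

A completion network's decoder then only has to predict the few envelope parameters, rather than every sample of the tail.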
# Neural Decay Analysis

[**DecayFitNet**](https://github.com/georg-goetz/DecayFitNet)[[1]](#ref1) is a lightweight neural network that replaces brittle iterative fitting for multi-exponential energy decay estimation. Trained on synthetic data, it provides deterministic inference without manual tuning, validated on over 20,000 real acoustic measurements.

---
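For contrast with the learned estimator, the classical pipeline it replaces can be sketched in a few lines: Schroeder backward integration of a decaying signal, then a log-linear fit over a hand-picked dB range. The synthetic test signal and the -5 to -25 dB fitting range are illustrative choices.

```python
import numpy as np

fs, t60_true = 48000, 0.8
rng = np.random.default_rng(2)
t = np.arange(fs) / fs                           # 1 s of synthetic decay
h = 10.0 ** (-3.0 * t / t60_true) * rng.standard_normal(t.size)

# Schroeder backward integration gives the energy decay curve (EDC).
edc = np.cumsum(h[::-1] ** 2)[::-1]
edc_db = 10.0 * np.log10(edc / edc[0])

# Log-linear fit over a hand-tuned evaluation range: exactly the kind of
# manual step a learned estimator such as DecayFitNet sidesteps.
mask = (edc_db < -5.0) & (edc_db > -25.0)
slope, _ = np.polyfit(t[mask], edc_db[mask], 1)  # slope in dB per second
t60_est = -60.0 / slope
```

With a single clean slope this works well; the fit becomes brittle precisely in the multi-slope, noisy cases the network is built for.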
# Physical Modeling with Neural Operators

Fourier neural operators[[2]](#ref2) learn to approximate PDE solutions for physical models of musical instruments, enabling real-time sound synthesis that captures the physics of vibrating strings and resonant bodies.

Demo: [**FNO for physical modeling**](https://julian-parker.github.io/DAFX22_FNO/) — Fourier neural operator examples.

---

# KLANN: Knowledge-Leveraging Audio Networks

[**KLANN**](https://github.com/ville14/KLANN)[[7]](#ref7) integrates domain knowledge into neural network architectures for audio processing, combining the efficiency of classical signal processing structures with the flexibility of learned parameters.

Demo: [**KLANN examples**](https://ville14.github.io/KLANN-examples/) — audio processing results.

---

# References

| | Year | Authors | Article |
|---|---|---|---|
|<span id="ref1">[1]</span>| 2022 | G. Götz, S. J. Schlecht & V. Pulkki | [DecayFitNet: neural network for energy decay analysis](https://doi.org/10.1121/10.0013416) |
|<span id="ref2">[2]</span>| 2022 | J. D. Parker, S. J. Schlecht et al. | [Physical modeling with Fourier neural operators](https://julian-parker.github.io/DAFX22_FNO/) |
|<span id="ref3">[3]</span>| 2023 | G. Dal Santo, K. Prawda et al. | [Differentiable feedback delay network for colorless reverberation](http://research.spa.aalto.fi/publications/papers/dafx23-colorless-fdn/) |
|<span id="ref4">[4]</span>| 2023 | L. Luoma, P. Fricker & S. J. Schlecht | [Deep learning for loudspeaker digital twin creation](https://doi.org/10.14627/537740052) |
|<span id="ref5">[5]</span>| 2024 | G. M. De Bortoli, G. Dal Santo et al. | [Differentiable active acoustics: optimizing stability via gradient descent](http://research.spa.aalto.fi/publications/papers/dafx24-diff-aa/) |
|<span id="ref6">[6]</span>| 2024 | G. Dal Santo et al. | [RIR2FDN: Improved room impulse response analysis and synthesis](http://research.spa.aalto.fi/publications/papers/dafx24-rir2fdn/) |
|<span id="ref7">[7]</span>| 2024 | V. Huhtala, L. Juvela & S. J. Schlecht | [KLANN: Knowledge-leveraging artificial neural network](https://doi.org/10.1109/lsp.2024.3389465) |
|<span id="ref8">[8]</span>| 2025 | J. Lin, G. Götz & S. J. Schlecht | [Deep room impulse response completion](https://doi.org/10.1186/s13636-024-00383-1) |
|<span id="ref9">[9]</span>| 2025 | G. Dal Santo et al. | [FLAMO: Frequency-sampling library for audio-module optimization](https://doi.org/10.1109/icassp49660.2025.10888532) |
|<span id="ref10">[10]</span>| 2025 | G. Dal Santo, K. Prawda et al. | [Optimizing tiny colorless feedback delay networks](https://doi.org/10.1186/s13636-025-00401-w) |
|<span id="ref11">[11]</span>| 2025 | M. Scerbo, S. J. Schlecht et al. | [Modeling feedback delay network output equivalences](https://doi.org/10.1109/taslpro.2025.3592322) |

---
