
Commit 6350fb9

refactor: remove blog posts section from homepage and update research tasks and '5-dollar-llm' blog post content.
1 parent 8fd2cc8 commit 6350fb9

File tree

3 files changed: +10 −44 lines

RESEARCH.md

Lines changed: 5 additions & 2 deletions
```diff
@@ -1,8 +1,11 @@
-### Open Tasks
+---
+
+## Open Tasks
 
 [Does RoPE mess with semantics of the vectors, what would you do differently? ➝](/blog/rope-semantics)
 
-Setup research environment for [Back to Basics: Let Denoising Generative Models Denoise](https://arxiv.org/pdf/2511.13720). Add research questions, ideas, topics, etc.
+
+[Make LLM pretraining accessible for everyone ➝](/blog/5-dollar-llm)
 
 
 ---
```

app/page.tsx

Lines changed: 0 additions & 35 deletions
```diff
@@ -87,41 +87,6 @@ export default function Home() {
         )}
       </div>
 
-      {/* Dynamic Blog Posts Section */}
-      {posts.length > 0 && (
-        <div className="mt-24 border-t border-white/10 pt-16">
-          <h2 className="text-3xl font-bold text-[#faf9f6] mb-12 flex items-center gap-3">
-            <span className="text-2xl">📚</span>
-            Latest Research Articles
-          </h2>
-          <div className="space-y-12">
-            {posts.map((post) => (
-              <div key={post.slug} className="group relative">
-                <div className="flex items-center gap-4 text-sm text-[#faf9f6]/50 mb-3">
-                  <span>{post.date}</span>
-                </div>
-                <Link href={`/blog/${post.slug}`} className="block">
-                  <h3 className="text-2xl font-bold text-white mb-3">
-                    {post.title}
-                  </h3>
-                </Link>
-                <p className="text-[#faf9f6]/75 text-lg leading-relaxed mb-4">
-                  {post.description}
-                </p>
-                <Link
-                  href={`/blog/${post.slug}`}
-                  className="inline-flex items-center gap-2 text-blue-400 hover:text-blue-300 font-medium transition-colors"
-                >
-                  Read full article
-                  <svg className="w-4 h-4 group-hover:translate-x-1 transition-transform" fill="none" stroke="currentColor" viewBox="0 0 24 24">
-                    <path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M17 8l4 4m0 0l-4 4m4-4H3" />
-                  </svg>
-                </Link>
-              </div>
-            ))}
-          </div>
-        </div>
-      )}
     </div>
   </main>
 </>
```

blog-posts/5-dollar-llm.md

Lines changed: 5 additions & 7 deletions
```diff
@@ -1,13 +1,11 @@
 ---
-title: "Pretrain GPT-5 on 1 GPU"
+title: "Pretrain GPT-2 on 1 GPU"
 date: "2026-01-19"
-description: "Research for maximum LLM training efficiency"
+description: "Make LLM pretraining accessible for everyone"
 ---
 
-Researching LLM to make it accessible to anyone.
+This [GitHub repo](https://github.com/Open-Superintelligence-Lab/5-dollar-llm) is an LLM that anyone can pretrain. We aim to make LLM pretraining more and more accessible to everyone (not less and less, as the common misconception suggests).
 
-[github](https://github.com/Open-Superintelligence-Lab/5-dollar-llm)
+GPT-1 has already been replicated in this repo. For GPT-2 we mainly need to increase the data size by 10x, though some architectural improvements would also help.
 
-
-
-![Mamba S6 Architecture Diagram](/content/mamba-deep-dive/architecture.png)
+Suggest your ideas; we need to empirically measure and validate improvements as described in the repo.
```
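As a sanity check on the "increase the data size by 10x" claim in the blog-post diff above, the commonly cited corpus sizes for GPT-1 and GPT-2 are roughly an order of magnitude apart. The figures below are approximate public numbers (BookCorpus ~4-5 GB, WebText ~40 GB), not values taken from this repo:

```python
# Approximate public training-corpus sizes, for scale intuition only.
# These numbers are assumptions cited from the original GPT papers,
# not measurements from the 5-dollar-llm repo.
corpora_gb = {
    "gpt1_bookcorpus": 4.5,   # GPT-1: BookCorpus, roughly 4-5 GB of text
    "gpt2_webtext": 40.0,     # GPT-2: WebText, roughly 40 GB of text
}

# The ratio works out to roughly the 10x data scale-up mentioned above.
ratio = corpora_gb["gpt2_webtext"] / corpora_gb["gpt1_bookcorpus"]
print(f"Data scale-up: ~{ratio:.0f}x")  # → Data scale-up: ~9x
```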
