Reluctant Buddha: Fine-Tuned LLM for shitty insights
2025-03-08
Turning a 1B LLM into a deranged cosmic entity running on CPU only.
719 words | 4 minutes
MiniMax-01: 4 Million Token Context Window!
2025-01-17
Lightning Attention: 4M token window, 456B params. Open-source LLM matches GPT-4, with affordable API. Redefines long-context processing.
536 words | 3 minutes
Titans: Revolutionizing Long-Context Processing in LLMs
2025-01-17
Innovative architecture combining short-term attention and long-term memory, handling 2M+ tokens. Outperforms GPT-4 in long-context tasks.
599 words | 3 minutes
Kokoro: A Tiny TTS Model That Beats the Giants
2025-01-15
82M parameters • Beats 1B+ models • Apache 2.0 licensed • Self-hostable • Multiple voices • Easy deployment
293 words | 1 minute
Bacteria: THC Production in E. coli
2025-01-14
Genetic engineering strategies and optimization techniques for producing THC using genetically modified E. coli.
4827 words | 24 minutes