nCPU: a CPU implemented using neural networks, runs completely on GPU
3 points by mrunix
Could somebody explain what these benchmarks measure?
Current evidence worth paying attention to:
- HumanEval+: qwen3.5:4b 147 -> 154, 9b 144 -> 156, 27b 153 -> 156
- BigCodeBench-Hard: finished qwen3.5:9b 33 -> 49
- Latent-memory proof: on a fresh synthetic memory-aware corpus, the learned memory head improved validation MSE by 83.26% over the zero-delta baseline
Getting real strong "crackpot" vibes from this GitHub user; I don't think it's worth trying to figure out what these "mean".
It is LLM-authored, and I'm not sure they know either. Or, if they do, they certainly didn't check the LLM's output before posting.
README claim: Neural nets do exact integer arithmetic (exhaustively verified)
README claim: 1,544 tests across 21 files: exhaustive formal verification, neural ops, neurOS, compute mode, multi-process, MUXLEQ, BusyBox/Alpine, GPU debugging toolkit, and more.
1,544 tests is... quite low for an entire OS plus "exhaustive formal verification", a debugging toolkit, etc. So I took a peek at the tests.
https://github.com/robertcprice/nCPU/blob/main/tests/test_math_ops.py: Note that it only claims reasonable approximations, not exact integer arithmetic.
Tests for neural math operations (sin, cos, sqrt, exp, log, atan2). These models are approximation networks — tests verify:
- Models load and produce outputs (no crashes)
- Output shapes and types are correct
- Known values produce reasonable approximations (wide tolerance)
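For what it's worth, a "reasonable approximation" test of the kind those docstrings describe usually looks something like the sketch below — hypothetical names and tolerance, not the repo's actual code. The point is that a wide-tolerance check like this can only ever demonstrate "roughly right", never exact arithmetic:

```python
import math

# Hypothetical stand-in for a neural sin approximator; a real model
# would produce some approximation error, simulated here as +0.01.
def neural_sin(x):
    return math.sin(x) + 0.01

def test_sin_approximation():
    # Wide tolerance (0.1): this passes even with sizeable error,
    # so it verifies approximation quality, not exactness.
    for x in [0.0, 0.5, 1.0, math.pi / 2]:
        assert abs(neural_sin(x) - math.sin(x)) < 0.1

test_sin_approximation()
print("approximation tests pass (tolerance 0.1)")
```

A test like this is perfectly reasonable for approximation networks — it just doesn't support a README claim of exact, exhaustively verified integer arithmetic.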
https://github.com/robertcprice/nCPU/blob/main/tests/test_self_optimizing.py
    class TestExecutionVerifiedModel(unittest.TestCase):
        """Test ExecutionVerifiedModel"""

        def test_create_model(self):
            """Test creating verified model"""
            model = ExecutionVerifiedModel(max_retries=2)
            self.assertEqual(model.max_retries, 2)

        def test_predict_success(self):
            """Test successful prediction"""
            model = ExecutionVerifiedModel()
            result = model.predict("test input")
            self.assertIsNotNone(result)

        def test_predict_with_verification(self):
            """Test prediction with custom verification"""
            model = ExecutionVerifiedModel()
            # Should succeed with verification
            result = model.predict(
                "test",
                verify_fn=lambda x: x is not None
            )
            self.assertIsNotNone(result)
This seems an incredibly far cry from "exhaustively tested".
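For contrast, a genuinely exhaustive verification of an integer op is actually feasible for small bit widths — you just enumerate every input. A sketch, where `neural_add` is a hypothetical stand-in for whatever 8-bit neural adder the repo would claim to verify:

```python
# Hypothetical placeholder for the model under test; a real check would
# call the trained network here instead.
def neural_add(a, b):
    return (a + b) % 256

# Exhaustive: all 65,536 input pairs for 8-bit addition mod 256.
failures = [
    (a, b)
    for a in range(256)
    for b in range(256)
    if neural_add(a, b) != (a + b) % 256
]
assert not failures, f"{len(failures)} mismatches"
print("exhaustive: all 65536 cases pass")
```

A handful of "model loads and returns something non-None" unit tests is a very different artifact from this kind of full-domain sweep, which is what "exhaustively verified" ought to mean.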
Good question — I'm still trying to figure that out myself. Maybe it has something to do with these docs? https://github.com/robertcprice/nCPU/blob/main/docs/SOME_COMPLETE_GUIDE.md https://github.com/robertcprice/nCPU/blob/main/docs/SOME_ARCHITECTURE.md https://github.com/robertcprice/nCPU/blob/main/docs/SOME_SPEC.md https://github.com/robertcprice/nCPU/blob/main/docs/SOME_RESULTS.md