
Go

Why Go is fast, what Rust could learn, and how it compares to TypeScript
Last updated: 2026-02-19

In one sentence

Go compiles in seconds, runs goroutines at 2KB each, pauses GC for <100 microseconds, and ships as a single binary — trading some raw CPU performance for dramatically faster development cycles and simpler operations.

Understand

Go was designed for fast compilation and simple concurrency at Google's scale. These are the technical decisions that make it fast.

3 numbers that matter

6x: faster compilation than Rust (0.23s vs 1.47s for the same problem)[1]
2KB: initial goroutine stack size (vs 1-2MB for OS threads), 500-1000x smaller
<100μs: typical GC stop-the-world pause (down from 300ms in Go 1.0 to sub-millisecond in Go 1.8)

Why Go compiles so fast

DESIGN DECISIONS
No header files: each package has binary export data, not parsed headers
No cyclic dependencies: dependency graph is always a DAG, enables parallel compilation
No symbol table required: language designed to parse without one
25 keywords only: vs 97+ in C++ — simpler grammar, faster parsing
Unused imports are errors: prevents compilation bloat

Compilation time comparison

Language | Same Problem | Notes
Go | 0.23s | Fastest
C++ | 0.56s | ~2.4x slower
Rust | 1.47s | ~6.4x slower (but catches more at compile time)
Export data: the secret sauce

Unlike C/C++ where the compiler parses thousands of header files, Go packages contain export data: a binary description of exported definitions. Importing a package reads one object file, not transitive dependencies. When imports don't affect exported types, dependent packages don't need recompilation.

Why Go is "fast enough"

Go isn't the fastest language at raw computation. It's 30-200% slower than Rust/C++ in CPU-intensive tasks. But it's fast enough for:

Where Go wins
✓ Compilation speed (2-6x faster)
✓ Development velocity (simpler language)
✓ Deployment (single static binary)
✓ Container startup (20-30% faster)
✓ Concurrency ergonomics (goroutines)
Where Go loses
✗ Raw CPU performance (30%+ slower than Rust)
✗ GC pauses (occasional latency spikes)
✗ Memory overhead (GC requires headroom)
✗ JSON serialization (reflection overhead)
✗ Low-level control (not for kernels)

Maturity map

Legend: proven · production · emerging
GO IN PRODUCTION
proven Container orchestration (Kubernetes, Docker)
proven Infrastructure tools (Terraform, Consul, Vault)
proven Monitoring (Prometheus, Grafana backend)
production Databases (CockroachDB, InfluxDB, TiDB)
production Network services (Cloudflare, Netflix, Uber)
emerging AI/ML infrastructure (not computation)

Terms & Glossary


Runtime concepts
Goroutine — Lightweight thread managed by Go runtime, 2KB stack.
GMP — Scheduler model: Goroutines, Machine threads, Processors.
Channel — Typed conduit for goroutine communication.
M:N scheduling — Many goroutines multiplexed onto few OS threads.
Work stealing — Idle processors steal goroutines from busy ones.
GOMAXPROCS — Number of OS threads for parallel execution.
Garbage collection
Tri-color GC — Mark-sweep using white/gray/black object states.
Write barrier — Mechanism to track pointer mutations during GC.
STW — Stop-the-world pause (now <100μs typical).
GOGC — GC trigger: new memory allowed as % of live heap.
GOMEMLIMIT — Hard memory ceiling (Go 1.19+).
Pacing — Algorithm balancing GC CPU cost vs memory.
Memory allocation
mcache — Per-P cache (lock-free allocation).
mcentral — Collects spans of a given size class.
mheap — Manages heap at 8KB page granularity.
mspan — Contiguous run of pages for allocation.
Type system
itab — Interface table: type info + method pointers (vtable).
iface — Non-empty interface: {itab*, data*}, 16 bytes.
eface — Empty interface (any): {type*, data*}, 16 bytes.
Devirtualization — Compiler converts interface call to direct call.
GCShape — Type grouping for generics (pointer vs non-pointer).
Method set — Methods callable on a type (differs for T vs *T).
Comparison terms
AOT — Ahead-of-time compilation (Go, Rust, C).
JIT — Just-in-time compilation (V8, JVM).
Monomorphization — Generating code per concrete type (Rust).
Type erasure — Removing type info at compile time (TypeScript).
Rust async & dispatch
dyn Trait — Rust's dynamic dispatch via fat pointer (data + vtable).
Fat pointer — 16-byte pointer pair: data pointer + vtable/len pointer.
Colored functions — Async/sync distinction that infects the call stack.
async_trait — Crate that boxes Futures to enable dyn async traits.
Send — Rust marker trait: type can be moved between threads.
Sync — Rust marker trait: type can be shared between threads.
RPITIT — Return-position impl Trait in traits (Rust 1.75+).

Decide

When to use Go vs alternatives.

Should I use Go or Rust?
  • Go if: web services, APIs, microservices, CLI tools, rapid prototyping, team velocity matters
  • Rust if: systems programming, HFT, real-time systems, WebAssembly, embedded, sub-ms latency requirements
  • Both: Go for services, Rust for performance-critical components (hybrid is common)
Should I migrate from Node.js to Go?
  • Yes if: CPU-bound workloads, high concurrency (>10k connections), memory constraints, cold start latency matters
  • Maybe not if: I/O-bound only, team knows only JavaScript, extensive npm ecosystem dependencies
  • Results seen: Uber: 50% CPU reduction. SimilarWeb: 28x faster, 10x less resources.
When is Go NOT the right choice?
  • Hard real-time: GC pauses (even <100μs) may be unacceptable
  • Extreme CPU optimization: Rust/C++ beat Go by 30-200%
  • Embedded/kernel: No runtime allowed
  • Heavy number crunching: Python (NumPy/CUDA) or Rust better
How do I tune Go performance?
  • GC pressure: Increase GOGC (default 100) for less CPU, more memory
  • Memory ceiling: Use GOMEMLIMIT (Go 1.19+) in containers
  • Profiling: Built-in pprof, runtime/trace for goroutine analysis
  • Allocation: Pool objects, avoid small allocations in hot paths

Runtime Deep Dive

GMP Scheduler Model

Go's scheduler multiplexes millions of goroutines onto a small pool of OS threads.

GMP COMPONENTS
G (Goroutine): user-space thread, 2KB stack, grows dynamically
States: Waiting, Runnable, Executing
Contains: stack, instruction pointer, state
M (Machine): OS thread wrapper
What the OS scheduler sees
Pool of threads reused by runtime
P (Processor): logical scheduler unit
Count = GOMAXPROCS (default: CPU cores)
Maintains Local Run Queue (LRQ) of goroutines
Enables lock-free fast-path allocation

Scheduling flow

Step | What happens
1 | go func() creates G, places it in P's Local Run Queue
2 | M bound to P fetches G from LRQ
3 | If LRQ empty, check Global Run Queue (GRQ)
4 | If GRQ empty, work-steal from other Ps' queues
5 | G blocks on syscall: M detaches from P; P is picked up by another M
Preemption evolution

Before Go 1.14: Cooperative only. CPU-bound goroutines could freeze others.
Go 1.14+: Asynchronous preemption via SIGURG signal. Goroutines running >10ms get preempted at safe points.

Garbage Collector

Go uses a concurrent tri-color mark-and-sweep collector optimized for low latency. (Tri-color: objects are White if unvisited, Gray if reachable but with children not yet scanned, Black if fully scanned; White objects are collected.)

GC evolution

Go 1.0-1.4
Stop-the-world GC. 300-400ms pauses.
Go 1.5 (2015)
Concurrent GC introduced. 30-40ms pauses.
Go 1.8 (2017)
Hybrid write barrier. <1ms pauses.
Go 1.19 (2022)
GOMEMLIMIT added for memory-constrained environments.
Go 1.25 (2025)
Green Tea GC. 10-40% less GC overhead.

GC tuning

Parameter | Default | Effect
GOGC | 100 | % heap growth before GC. Higher = less CPU, more memory.
GOMEMLIMIT | unlimited | Hard memory ceiling. Triggers GC when approached.
Cloudflare achieved 22x performance gains by setting GOGC=11300 for CPU-bound workloads. Extreme but shows the lever's power.

Why goroutines are cheap

Resource | Goroutine | OS Thread | Factor
Initial stack | 2KB | 1-2MB | 500-1000x smaller
Context switch | ~100ns (3 registers) | ~1-10μs (50+ registers) | 10-100x faster
Creation cost | ~300ns | ~10μs+ | 30x+ faster
Max concurrent | Millions possible | ~10k practical | 100x+ more

With 2KB stacks: 0.5 million goroutines per GB of RAM.

Type System & Interface Dispatch

Go's type system is deceptively simple on the surface but has sophisticated runtime machinery underneath. Understanding it explains both Go's power and its tradeoffs vs Rust and TypeScript.

Hybrid typing: structural + nominal

Nominal typing

Named types are distinct even with same underlying type:

type Celsius float64
type Fahrenheit float64
// Cannot assign Celsius to Fahrenheit
// Must explicitly convert
Structural typing

Interfaces are satisfied implicitly by method signature:

type Reader interface {
    Read(p []byte) (n int, err error)
}
// *os.File satisfies Reader
// No "implements" keyword needed

Runtime type representation

Every type has an abi.Type descriptor in the binary (from internal/abi/type.go):

abi.Type STRUCTURE
Size_ — size in bytes
PtrBytes — prefix bytes containing pointers (for GC scanning)
Hash — pre-computed hash for O(1) type identity checks
Kind_ — category: Int, Struct, Interface, Pointer, etc.
Equal — function pointer for equality comparison
GCData — pointer bitmap for garbage collector

Key flag: TFlagDirectIface — when set, concrete value stored directly in interface (not pointer to heap). Used for pointers and single-word values.

Interface memory layout

Interfaces are fat pointers — 16 bytes on 64-bit systems:

Interface Type | Structure | Size
Non-empty (io.Reader) | iface { tab *itab; data unsafe.Pointer } | 16 bytes
Empty (any) | eface { _type *_type; data unsafe.Pointer } | 16 bytes

Empty interfaces (any/interface{}) skip the itab lookup since there are no methods to dispatch.

The itab: Go's virtual table

The itab (interface table) is Go's mechanism for dynamic dispatch: the core dispatch structure holding the interface type, the concrete type, and the method function pointers, cached globally for reuse:

type itab struct {
    inter *interfacetype  // Interface being satisfied
    _type *_type          // Concrete type implementing it
    hash  uint32          // Copy of _type.hash (fast type switches)
    _     [4]byte         // Padding
    fun   [1]uintptr      // Variable-sized method table (vtable)
}
The fun array trick

fun is declared as [1]uintptr but the compiler allocates space for all interface methods. Runtime accesses via pointer arithmetic, bypassing bounds checking. fun[0] == 0 means "type doesn't implement interface."

How Go decides which method to call

Compile time
Compiler assigns each interface method a fixed index. Read might be index 0, Write index 1.
First assignment
When var r io.Reader = f runs, runtime calls getitab() to build/lookup the itab.
itab construction
Walk interface and concrete type method lists (both pre-sorted by name hash). O(n+m) matching. Fill fun[i] with concrete method addresses.
Cache
Store itab in global hash table. Future assignments with same type pair are O(1) lookup.
Method call
r.Read(buf) → load itab.fun[0] → call with r.data as receiver. One pointer deref + indexed load.

Assembly view of interface call

MOVQ    (r.tab), DX        // Load itab pointer
MOVQ    24(DX), AX         // Load fun[0] (method at offset 24)
MOVQ    8(r), CX           // Load r.data (concrete value)
CALL    AX                 // Jump to method

Offset 24 = sizeof(inter) + sizeof(_type) + sizeof(hash) + padding = 8+8+4+4.

Method set rules

Receiver Type | Method Set Contains
Value T | Methods with receiver T only
Pointer *T | Methods with receiver T and *T
Why this matters for interfaces
type Stringer interface { String() string }

type S struct{}
func (s *S) String() string { return "S" }

var _ Stringer = &S{}   // OK: *S has String
var _ Stringer = S{}    // ERROR: S doesn't have String

Values in interfaces are not addressable, so pointer-receiver methods can't be called on them.

Compiler optimizations

Devirtualization

When the compiler can prove the concrete type, it converts interface calls to direct calls:

h := sha1.New()      // Returns hash.Hash, but compiler knows it's *sha1.digest
h.Write(data)        // Converted to direct call: sha1.(*digest).Write

Check with: go build -gcflags='-m'

Profile-Guided Optimization (Go 1.21+)

PGO transforms hot interface calls based on runtime profile data:

// Before PGO
r.Read(buf)

// After PGO (if profile shows *os.File dominates)
if f, ok := r.(*os.File); ok {
    f.Read(buf)      // Direct call, inlinable
} else {
    r.Read(buf)      // Fallback indirect call
}

Result: 2-14% CPU reduction typical.

Performance cost breakdown

Operation | Cost | Notes
Direct call | ~1.6 ns | Baseline
Interface call | ~15 ns | Includes potential allocation
itab lookup (cached) | ~0.15 ns | Global hash table
Type assertion | ~1 ns | Pointer comparison
Type switch | ~2-5 ns | Hash + comparison
Hidden costs: heap allocation from boxing, lost inlining through interface boundary, GC pressure from interface allocations. Interface calls can be 10x slower than direct calls in microbenchmarks.

Common pitfalls

1. The nil interface trap

func maybeError() error {
    var p *os.PathError = nil
    return p  // Returns (type=*os.PathError, data=nil)
}

err := maybeError()
fmt.Println(err == nil)  // false! Interface has type but nil data

An interface is nil only when both tab (or _type) AND data are nil.

Fix: return the interface type directly
func maybeError() error {
    var p *os.PathError = nil
    if p == nil {
        return nil  // Returns (type=nil, data=nil)
    }
    return p
}

2. Escape analysis failure

func process(r io.Reader) {
    buf := make([]byte, 1024)  // Escapes to heap!
    r.Read(buf)                // Compiler can't analyze through interface
}

Compiler doesn't know concrete Read implementation → assumes buffer could be retained → heap allocation.

Version | Time | Allocations
With interface | ~24.5 ns/op | 1 alloc
With concrete type | ~5.5 ns/op | 0 allocs

3. Type assertion panics

m := map[string]any{"count": 42}
count := m["count"].(int64)  // PANIC: 42 is int, not int64

// JSON unmarshaling is worse:
var data map[string]any
json.Unmarshal([]byte(`{"count":42}`), &data)
count := data["count"].(int)  // PANIC: it's float64!

Fix: Use comma-ok form or type switch.

Comparison: Go vs Rust vs TypeScript

Aspect | Go | Rust | TypeScript
Interface model | Structural (implicit) | Nominal (explicit impl) | Structural (implicit)
Dispatch | itab vtable at runtime | Monomorphization OR dyn vtable | None (erased at compile time)
Runtime type info | Full (reflect pkg) | Limited (TypeId) | None (erased)
Generic dispatch | GCShape stenciling + dict | Full monomorphization | N/A (erased)
Null safety | nil interface trap | Option type (no null) | null/undefined
Performance | ~15 ns/interface call | 0 ns (monomorph) or ~3 ns (dyn) | V8-dependent

Rust's approach

// Static dispatch (monomorphization) - zero cost
fn process<R: Read>(r: R) { ... }

// Dynamic dispatch (vtable) - explicit opt-in
fn process(r: &dyn Read) { ... }

Rust advantage: You choose. Default is zero-cost. dyn vtable is ~3 ns (smaller than Go's itab).
Rust disadvantage: Monomorphization causes binary bloat and longer compile times.

TypeScript's approach

interface Reader {
    read(buf: Uint8Array): number;
}
// Types completely erased at runtime
// API response typed as User is actually `any`

TypeScript problem: No runtime type checking. I/O boundaries are unsafe. Need Zod or similar for validation.

Why Go's tradeoff works

The Go philosophy

Go accepts ~15 ns interface overhead in exchange for:

  • Implicit satisfaction: No need to declare implements
  • Full runtime reflection: reflect package works on all types
  • Simpler mental model: No lifetime annotations on trait objects
  • Faster compilation: No monomorphization explosion

For most server workloads, 15 ns per interface call is noise compared to network latency (1-100 ms).

Generics: Go's partial monomorphization

Go 1.18+ generics use GCShape stenciling with dictionaries — a hybrid approach:

Approach | Go | Rust
Strategy | Group types by "GC shape" (pointer vs non-pointer) | Full monomorphization per concrete type
Code generated | One per shape + runtime dictionary | One per concrete type
Binary size | Smaller | Larger (can be huge)
Compile time | Faster | Slower
Runtime perf | Slower (dictionary indirection) | Faster (fully specialized)
Go's generics can make code slower than pre-generics interface-based code in some cases. The compiler cannot yet generate shape-specialized versions that inline method calls through pointers.

Go vs Rust

Different philosophies: Go prioritizes developer velocity, Rust prioritizes correctness.

Compilation speed

Go: seconds
✓ 0.23s for benchmarks
✓ 1-30s for large projects
✓ No LLVM dependency
✓ Export data, not headers
Rust: minutes
✗ 1.47s for benchmarks (6x slower)
✗ 30s-10m for large projects
✗ LLVM takes 65-84% of build time
✗ Monomorphization costs

Runtime performance

Metric | Go | Rust | Winner
CPU-intensive | 1x baseline | 1.3-2x faster | Rust
Binary trees bench | 1x | 12x faster | Rust
Web service RPS | 2,001 RPS | 3,887 RPS | Rust
Memory usage | Higher (GC) | 30-50% less | Rust
Latency consistency | GC spikes | Deterministic | Rust
Development speed | Days to productive | Months to proficient | Go

Rust's dynamic dispatch deep dive

To understand why Go and Rust make different tradeoffs, we need to see how Rust handles dynamic dispatch and how it interacts with async.

Rust vtable vs Go itab

Go: itab (always dynamic)
// Interface always uses itab
var r io.Reader = file
r.Read(buf)  // vtable lookup

16 bytes: {tab *itab, data *T}

Global hash table caches itabs by (interface, concrete) pair.

Rust: dyn Trait (opt-in dynamic)
// Static dispatch (default)
fn process<R: Read>(r: R) { }

// Dynamic dispatch (explicit)
fn process(r: &dyn Read) { }

16 bytes: {data: *T, vtable: *const ()}

Vtable is statically allocated, no runtime lookup.

Rust vtable structure

Rust's vtable layout is simpler than Go's itab:

// Rust vtable layout (conceptual)
struct Vtable {
    drop_in_place: fn(*mut ()),     // Destructor
    size: usize,                     // sizeof(T)
    align: usize,                    // alignof(T)
    method_0: fn(*const ()) -> R,   // First trait method
    method_1: fn(*const ()) -> R,   // Second trait method...
}

Key differences:
  • Rust's vtable is built at compile time and lives in static memory; Go's itab is constructed at runtime and cached in a global hash table.
  • Rust's vtable carries drop/size/align metadata; Go's itab instead carries the concrete type's hash for fast type switches.
  • Rust pays no first-use cost; Go pays 15-45 ns the first time an (interface, concrete type) pair is used.

Dispatch performance comparison

Operation | Go | Rust dyn | Rust impl
Single call overhead | ~2 ns (cached) | ~2-3 ns | ~0.6 ns
First-time setup | 15-45 ns (itab) | 0 ns (static vtable) | 0 ns
Inlining possible | No | No | Yes
Branch prediction | Indirect jump | Indirect jump | Direct call
Cache locality | Pointer chase | Pointer chase | Inline

Real-world benchmark: 20M element iteration

Approach | Time | vs Baseline | Why
Rust impl Trait (static) | 64 ms | 1.0x (baseline) | Inlined, zero dispatch
Rust &dyn Trait | 216 ms | 3.4x slower | Vtable lookup per call
Go interface | ~250 ms* | ~3.9x slower | itab + potential boxing

*Estimated based on similar workloads; actual varies by GC pressure and escape analysis.

Why static dispatch wins

At 20M iterations, even 2 ns per call adds up to 40 ms. But the real cost is lost inlining: static dispatch lets the compiler inline the method body, eliminate bounds checks, and apply loop optimizations. Dynamic dispatch (Go or Rust dyn) blocks all of this. In hot loops, this is the difference between 64 ms and 216+ ms.

The three dispatch tiers

RUST: impl Trait (MONOMORPHIZATION)
Compiler generates specialized code per concrete type
Method calls become direct jumps, often inlined
Cost: 0 ns overhead, but larger binaries
RUST: &dyn Trait (VTABLE)
Fat pointer: data ptr + vtable ptr (16 bytes)
Vtable statically allocated at compile time
Cost: ~2-3 ns per call (indirect jump)
GO: interface{} (ITAB)
Fat pointer: itab ptr + data ptr (16 bytes)
itab constructed at runtime, cached globally
Cost: ~2 ns cached, 15-45 ns first time
Extra cost: potential heap allocation (boxing)

When each matters

Scenario | Best Choice | Why
Hot inner loop (millions of calls) | Rust impl | 3-4x faster than dyn
Heterogeneous collection | Rust dyn / Go | Must store different types
Plugin architecture | Rust dyn / Go | Types not known at compile time
API boundaries | Go / Rust dyn | Abstraction more important than speed
Rapid prototyping | Go | No need to choose a dispatch strategy
Go's simplicity comes at a cost: you can't opt into static dispatch for performance-critical paths. Every interface call pays the dynamic dispatch tax. Rust lets you choose per call site, but you must think about it.

The colored functions problem

The deepest architectural difference between Go and Rust concurrency is "function colors."

Go: No colors
// All functions are the same "color"
func process(items []int) {
    for _, item := range items {
        go handleItem(item)  // Just spawn
    }
}

func handleItem(x int) {
    // Can do I/O, blocking, anything
    time.Sleep(time.Second)
    fmt.Println(x)
}

Goroutines are invisible to the type system. Any function can spawn concurrent work.

Rust: Two colors
// Async functions are "colored"
async fn process(items: Vec<i32>) {
    for item in items {
        handle_item(item).await;  // Must await
    }
}

async fn handle_item(x: i32) {
    tokio::time::sleep(Duration::from_secs(1)).await;
    println!("{}", x);
}

Async infects the call stack. Can't call async from sync without runtime.

Why colors matter

// Rust: Can't do this!
fn sync_function() {
    // ERROR: `await` is only allowed inside `async` functions
    async_database_query().await;
}

// Must propagate async up the call stack
async fn sync_function() {        // Now async
    async_database_query().await;
}

// Or use block_on (spawns new runtime, expensive)
fn sync_function() {
    tokio::runtime::Runtime::new()
        .unwrap()
        .block_on(async_database_query());
}

Go avoids this entirely: The scheduler handles blocking transparently. When a goroutine blocks on I/O, the runtime parks it and runs another — no code changes needed.

Async traits: Rust's ongoing challenge

Combining async with dyn Trait has been one of Rust's hardest problems.

The problem

// This doesn't work directly
trait Database {
    async fn query(&self) -> Vec<Row>;  // Returns impl Future, size unknown!
}

// Can't put in vtable: Future size varies per implementation

Why? Async functions return impl Future — the concrete Future type differs per implementation. Vtables need fixed-size entries.

Solution 1: async_trait crate (pre-1.75)

#[async_trait]
trait Database {
    async fn query(&self) -> Vec<Row>;
}

// Expands to:
trait Database {
    fn query(&self) -> Pin<Box<dyn Future<Output = Vec<Row>> + Send>>;
}

Cost: Every call allocates a Box on the heap. ~10-50 ns overhead per call.

Solution 2: Return-Position Impl Trait in Traits (Rust 1.75+)

// Now works natively! (Rust 1.75, Dec 2023)
trait Database {
    fn query(&self) -> impl Future<Output = Vec<Row>>;
}

// But: still can't use with dyn Database easily
// Need trait_variant or manual boxing for dynamic dispatch

Still evolving: async fn in traits landed, but dyn Trait with async methods requires trait_variant or explicit boxing.

Async dispatch performance

Approach | Allocation | Overhead | Use Case
Static (impl Trait) | None | ~0 ns | Known type at compile time
async_trait (boxed) | Per call | ~10-50 ns | dyn Trait needed
Go interface | Sometimes | ~2-15 ns | Always (no static option)

Send + Sync: Rust's async complexity tax

Rust async functions capture their environment. Moving Futures across threads requires proving thread safety.

// This fails:
async fn process(data: &RefCell<Data>) {
    // RefCell is not Sync, can't hold across .await
    let guard = data.borrow();
    async_operation().await;  // ERROR: future is not Send
    println!("{:?}", guard);
}

// Go equivalent just works:
func process(data *Data) {
    mu.Lock()
    defer mu.Unlock()
    asyncOperation()  // Blocks, but goroutine handles it
    fmt.Printf("%v\n", data)
}

Rust forces you to think about:
  • whether each Future is Send (can it migrate between worker threads?)
  • what is held across every .await point (locks, borrows, guards)
  • which executor (e.g. Tokio) drives the Future to completion

Go hides this: The runtime manages goroutine migration. Data races are possible but not compile errors.

The tradeoff in practice

Rust: Complex to write, impossible to have data races (in safe code). Compiler errors can be cryptic for async + lifetimes.
Go: Simple to write, race detector catches issues at runtime. Easier to write, easier to have subtle bugs.

What Rust could borrow from Go

QUICK WINS
Blessed async runtime recommendation (Tokio already dominates)
Cranelift as default for debug builds (3.5x faster)
Official "extended std" bundle (http, json, random)
Cargo script stabilization for single-file programs
HARDER CHANGES
Simpler "learning mode" subset (hide lifetimes initially)
Better colored function bridging (async/sync interop)
GC-like ergonomics via pervasive Arc for prototyping
Green threads runtime option (like Go's goroutines)
Why Go's simplicity is hard to replicate

Go's "no colors" comes from its runtime design: the scheduler intercepts all blocking operations. Rust's zero-cost abstraction philosophy means you manage concurrency — no hidden runtime magic. This is a fundamental philosophical difference, not a missing feature. Rust trades simplicity for control; Go trades control for simplicity.

Philosophical differences

Go philosophy
  • "Get it done. Simplicity and pragmatism over perfection."
  • 25 keywords, minimal features, fast compilation
  • GC: simpler mental model, accept some overhead
  • Generics added late (Go 1.18), intentionally limited
Rust philosophy
  • "Get it right. Correctness and safety. Let the compiler catch bugs."
  • Complex type system, steep learning curve
  • No GC: manual ownership, deterministic performance
  • Powerful generics with monomorphization

When to use each

Use Case | Go | Rust
Web services | Best | Good
CLI tools | Best | Good
Microservices | Best | Good
Rapid prototyping | Best | Slower
Systems programming | Adequate | Best
Real-time systems | GC concern | Best
WebAssembly | Possible | Best
Embedded | No | Best

2025 trend: Hybrid stacks are common. Go for services, Rust for performance-critical components.

Go vs TypeScript

Compiled native code vs JIT-compiled JavaScript.

Execution model

Go: AOT compilation
✓ Native machine code
✓ No runtime required
✓ 50ms cold start (AWS Lambda)
✓ Consistent performance from start
TypeScript/Node: JIT compilation
✗ V8 engine required
✗ 170ms cold start (3x slower)
✗ Needs warmup for peak performance
✗ Deoptimization possible

Performance benchmarks

Metric | Go | Node.js | Factor
HTTP throughput | 180k+ req/s | <40k req/s | 4.5x
CPU-bound tasks | 1x | 2.6-30x slower | 2.6-30x
Memory (high load) | <150MB | >280MB | 2x less
Concurrent connections | 100k+ efficient | ~30k before overhead | 3x+
Cold start | 50ms | 170ms | 3x faster

Type system differences

Feature | Go | TypeScript
Type checking | Compile-time, enforced | Compile-time, erased
Runtime type info | Full (reflect package) | None (erased)
Nil/null safety | Nil panics possible | Same (null/undefined)
Soundness | Sound (no unsound escape hatch) | Unsound (any escapes checking)
TypeScript's type erasure creates I/O vulnerabilities

TypeScript types are removed at compile time. API responses typed as User are just any at runtime. Go's reflect package provides full runtime type introspection. Use Zod or similar for runtime validation in TypeScript.

Concurrency models

Aspect | Go | Node.js
Model | Goroutines + channels | Event loop + async/await
CPU parallelism | Automatic (all cores) | Worker threads (explicit)
Unit memory | 2KB per goroutine | ~1MB per worker thread
CPU-bound work | Native parallel | Blocks event loop
I/O-bound work | Excellent | Excellent

Real-world migrations

Uber 2016
Geofence service (highest QPS)
  • Node.js struggled with CPU-intensive point-in-polygon
  • Go: 50% CPU reduction
  • 170,000 queries/second at peak
  • 99.99% uptime achieved
SimilarWeb
Data processing service
  • 28x faster processing
  • CPU: 25GHz → 17GHz
  • Memory: 12.5GB → 8.5GB
  • 10x overall resource reduction
Microsoft: TypeScript Compiler → Go 2025
Project Corsa

Microsoft is rewriting the TypeScript compiler in Go for 10x faster compilation:

Project | Before | After | Speedup
VS Code (1.5M lines) | 77.8s | 7.5s | 10.4x
Playwright | 11.1s | 1.1s | 10.1x
TypeORM | 17.5s | 1.3s | 13.5x
Editor load time | 9.6s | 1.2s | 8x

Why Go over Rust: "Programming style closely resembles existing TypeScript codebase. Faster port timeline (1 year vs years for Rust)."

When to choose

Choose Go when:
  • CPU-bound workloads or high concurrency
  • Cold start latency matters (serverless)
  • Memory constrained environment
  • Need single binary deployment
  • Building infrastructure/DevOps tools
Choose TypeScript/Node when:
  • I/O-bound workloads (APIs calling other APIs)
  • Full-stack JS team (shared code frontend/backend)
  • Rapid prototyping with npm ecosystem
  • Real-time with Socket.io/WebSockets
  • Development speed is primary concern

Learn

Recommended learning path
1. Tour of Go: official interactive tutorial. Covers basics in ~2 hours.
2. Effective Go: idiomatic Go patterns. Read after the Tour.
3. Build a CLI tool: practical project with cobra/viper. Single binary distribution.
4. Build an HTTP service: use the standard library net/http first, then try Gin or Echo.
5. Understand the runtime: GMP scheduler, GC internals, pprof profiling.

Key resources

Resource | URL | For
Official docs | go.dev/doc | Reference
Go Playground | go.dev/play | Quick experiments
Go by Example | gobyexample.com | Pattern reference
Ardan Labs Blog | ardanlabs.com/blog | Advanced internals
TechEmpower Benchmarks | techempower.com/benchmarks | Performance comparison

Popular frameworks

Web frameworks
  • Gin — Most popular, 75k+ stars
  • Echo — High performance, extensible
  • Fiber — Express-inspired, fast
  • fasthttp — Low-level, highest perf
Infrastructure
  • Cobra — CLI framework
  • Viper — Configuration
  • Zap — Structured logging
  • GORM — ORM