Swift and Unnoticed

I keep coming back to Swift with the mild frustration of someone who found a great restaurant that nobody else seems to know about. The food is good. The kitchen runs on a coherent philosophy. The location is technically accessible to anyone. And yet somehow, the association with Apple has managed to make the whole thing feel like a house brand rather than a language worth taking seriously on its own terms.

That positioning was set early and it stuck. Swift launched at WWDC 2014 as “Objective-C without the C” — which, to Apple developers at the time, was an enormously compelling pitch. To everyone else, it read as “a language for making iPhone apps.” It went open source in December 2015, Linux support landed the same day, and almost none of it mattered to how the broader developer community categorized it. A language that starts its life as an Apple keynote feature has a branding problem that takes a long time to undo.

The irony is that Swift was designed to be general-purpose from the start. Fast compilation to native code, ARC-based memory management without a garbage collector, C and C++ interoperability baked in, a type system that was clearly influenced by functional languages. None of those are “iOS features.” They’re just features of a well-designed compiled language that happened to have Apple’s name on the launch announcement.


What the language actually is

Swift uses Automatic Reference Counting (ARC) rather than a garbage collector. The difference is in when memory gets freed. With a garbage collector — used by Go, Java, and most other managed languages — the runtime periodically scans for objects that are no longer needed and frees them in batches. That process introduces pauses: brief moments where the program stops to let the collector run. Usually imperceptible, but a real constraint in latency-sensitive environments.

ARC works differently: every object has a counter that tracks how many things reference it. When that count drops to zero, the memory is freed immediately, right there, with no separate collection phase. The tradeoff is that ARC inserts bookkeeping overhead on every object assignment — incrementing and decrementing those counters — and it can’t automatically break reference cycles (two objects that reference each other, keeping each other’s count above zero indefinitely); breaking a cycle requires explicit developer intervention using weak or unowned references. For long-running server processes where latency consistency matters, that predictable, pause-free cleanup can be a more valuable property than the raw throughput a well-tuned garbage collector achieves at scale.
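The cycle problem is easy to make concrete. In this minimal sketch (Owner and Pet are invented types), two objects reference each other, and marking one side weak lets both reference counts reach zero, so both deinitializers run the instant the last strong reference disappears:

```swift
var freedCount = 0

final class Owner {
    var pet: Pet?               // strong reference
    deinit { freedCount += 1 }
}

final class Pet {
    weak var owner: Owner?      // weak: does not keep Owner alive
    deinit { freedCount += 1 }
}

func makeAndDropPair() {
    let owner = Owner()
    let pet = Pet()
    owner.pet = pet
    pet.owner = owner
}   // strong locals go out of scope here; both objects are freed immediately

makeAndDropPair()
```

If Pet held a strong owner reference instead, neither deinitializer would ever run: each object would hold the other’s count at one forever, and ARC has no cycle detector to notice.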

The type system has genuine expressive power, and generics are worth unpacking because Swift, Go, and Java handle them differently, and the differences have real performance consequences. When you write a generic function — one that works on any type — the compiler has to decide what machine code to actually produce. Swift (and Rust) go all the way: they generate a separate, fully optimized version of the function for every concrete type you use it with. A generic sort used with integers produces one compiled function; used with strings, another. This is called specialization, and it means the compiler can apply the same optimizations it would to hand-written non-generic code — inlining, eliminating branches, the works. The cost is that the final binary contains more code, and compile times are longer.

Java does the opposite: it compiles generics once and erases the specific types at runtime, treating everything as a generic object. Simpler, smaller output, but you lose type-specific optimizations and pay boxing overhead — wrapping primitives like integers into heap objects to pass them around uniformly.

Go’s approach, added in Go 1.18, sits in between: it generates one version of the function per memory shape rather than per type, meaning all pointer types — regardless of what they point to — share the same compiled code. To handle type-specific behavior like method calls, Go passes a runtime dictionary alongside the function. The consequence is that method calls through generic interfaces can’t be inlined, and Go’s generic code can actually be slower than equivalent interface-based code where the compiler had more information to work with. Swift and Rust pay the cost upfront at compile time and produce larger binaries; Go compiles faster and produces smaller output, but leaves some optimization on the table.
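From the developer’s side, specialization is invisible in the source. A throwaway generic function like this one (sumOfSquares is an invented example) is written once, and under optimization the compiler can emit one specialized copy per concrete type, as if an Int version and a Double version had been written by hand:

```swift
// One generic definition; under optimization the compiler can emit
// a specialized Int version and a specialized Double version.
func sumOfSquares<T: Numeric>(_ values: [T]) -> T {
    values.reduce(T.zero) { $0 + $1 * $1 }
}

let fromInts = sumOfSquares([1, 2, 3])       // Int specialization: 14
let fromDoubles = sumOfSquares([1.5, 2.0])   // Double specialization: 6.25
```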

Value semantics with structs and enums as first-class citizens means you spend far less time reasoning about accidental mutation than you do in languages where everything is a reference by default — where passing an object to a function means both the caller and the function are looking at the same thing in memory, and either one can change it. Swift 6’s strict concurrency model introduced compile-time actor isolation — the compiler refuses to let you share mutable state across concurrent tasks unsafely, catching data races at build time rather than as mysterious bugs in production.
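The value-semantics point fits in a few lines. With an invented Config struct, assignment copies the value, so mutation of the copy cannot reach the original:

```swift
struct Config {
    var host: String
    var port: Int
}

let original = Config(host: "localhost", port: 8080)
var copy = original      // a struct assignment copies the value
copy.port = 9090

// original.port is still 8080; the two values are fully independent
```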

Error handling in Swift 6 completed a transition that had been in progress for years. The original throws annotation told the caller a function could fail but said nothing about what kind of failure — you knew an error was possible but had to handle any error type generically. Swift 6 added typed throws: throws(MyError) declares the exact error type the function can produce, so the compiler knows precisely what you need to handle and can enforce it. This brings Swift closer to Rust’s approach — where functions that can fail return a Result type that explicitly carries either a success value or a specific error — while keeping the more readable try/catch syntax at the call site instead of requiring you to chain operations through the result type manually. Go’s approach is to return errors as a second value alongside the normal return: every fallible function returns value, error, and callers check if err != nil after every call. It’s explicit, but verbose, and structurally easy to accidentally ignore. Swift’s typed throws lands somewhere more honest than Go and more readable than Rust for most application code.
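A sketch of what typed throws looks like in practice; ParseError and parsePort are invented names, and the throws(ParseError) syntax requires a Swift 6 toolchain:

```swift
enum ParseError: Error {
    case empty
    case invalid(String)
}

// throws(ParseError) declares the only error type this function can produce.
func parsePort(_ raw: String) throws(ParseError) -> Int {
    guard !raw.isEmpty else { throw ParseError.empty }
    guard let value = Int(raw), (1...65535).contains(value) else {
        throw ParseError.invalid(raw)
    }
    return value
}

do {
    _ = try parsePort("not-a-port")
} catch {
    // error is statically typed as ParseError here, not any Error,
    // so the switch can be exhaustive with no default case.
    switch error {
    case .empty: print("empty input")
    case .invalid(let raw): print("invalid port: \(raw)")
    }
}
```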

Performance is where honest accounting matters. Swift compiles via LLVM, so the optimization pipeline is solid — but benchmark data from programming-language-benchmarks.vercel.app (August 2025, AMD EPYC 7763) puts Swift closer to JVM-tier languages than to the systems language tier it aspires to occupy. On binarytrees (memory allocation-heavy), Swift 6.1.2 clocks around 2661ms vs Rust at ~1260ms and Go at 1726ms — roughly 2x behind Rust, trailing Go as well. On fannkuch-redux (CPU-bound), Swift lands at 2469ms against Go’s 724ms, which is over 3x slower. That’s not what you’d expect from a language often marketed as C-adjacent.

The structural reason is ARC. Every time an object is assigned or passed around, Swift increments and decrements reference counts behind the scenes. For workloads that create and destroy a large number of objects rapidly — parsing, tree traversal, anything allocation-heavy — those bookkeeping costs compound. Rust avoids them entirely: its ownership model means the compiler knows at compile time exactly when memory can be freed, with no runtime tracking needed whatsoever. Raw throughput on compute and allocation benchmarks puts Swift closer to Kotlin or C# than to Go, let alone Rust.

Compile times are worth naming honestly. Swift’s compiler does significant work per module — whole-module optimization, type inference across complex generic constraints, protocol conformance resolution — and historically this was a real friction point. Large codebases in the pre-Swift 5 era had builds that were slow enough to affect daily workflow. The situation has improved meaningfully: incremental compilation got better, the compiler got faster, and for a typical CLI or server project the times are perfectly acceptable. But Swift still trails Go, which compiles fast enough that build time is essentially never a consideration, and is roughly comparable to Rust — sometimes faster, sometimes slower depending on generic complexity. If you’re evaluating Swift for a large monorepo or a team where iteration speed matters, compile time is a real variable rather than a footnote.

Where ARC is architecturally interesting is latency predictability. Because memory is freed the moment the reference count hits zero — not whenever the next collection cycle happens to run — there is no pause to schedule and no heap headroom being reserved for the collector. The question of whether that translates to meaningfully better tail latency in production Swift servers doesn’t have a well-documented answer yet; the Swift Forums have an open thread from 2024 specifically asking for real-world data, and the responses are theoretical rather than empirical.


Server-side

Vapor is the main production framework — routing, ORM (Fluent), JWT, templating, built on SwiftNIO’s non-blocking event loop architecture. Hummingbird is the leaner alternative that’s been gaining adoption for projects that want less framework and more direct NIO access.

The practical value proposition for iOS teams is real: share data models, validation logic, and business rules between iOS and backend in the same language, without a translation layer. The Swift Server Workgroup has driven improvements to the language and tooling, and the ecosystem has adopted core observability packages for logging, metrics, and tracing, so the production instrumentation story is no longer a gap. gRPC Swift 2 added first-class structured concurrency support, which makes it a reasonable choice for service-to-service communication in a modern backend stack.
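The shared-model idea is concrete enough to sketch. User here is an invented type, but the pattern is the whole value proposition: a Codable struct living in a shared package, decoded identically by the iOS client and the server, with no schema translation layer to drift out of sync:

```swift
import Foundation

// Lives in a shared Swift package; both the iOS app and the server
// depend on it, so the wire format cannot drift between them.
struct User: Codable, Equatable {
    var id: Int
    var name: String
}

let wire = #"{"id": 1, "name": "Ada"}"#
let decoded = try JSONDecoder().decode(User.self, from: Data(wire.utf8))
// decoded is the same User value on either platform
```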

The production evidence for GC pauses as a real operational problem comes from elsewhere in the server world, but it’s directly relevant here. Discord’s 2020 migration from Go to Rust is the clearest case study: their Read States service — tracking read position per user per channel across billions of records, with hundreds of thousands of cache updates per second — was producing 40ms latency spikes every two minutes in Go. The cause was Go’s garbage collector running collection on a large cache. Rust eliminated the pauses entirely. That post isn’t about Swift, but it documents precisely the failure mode that ARC-based memory management structurally avoids. Whether Swift’s server story eventually produces a case study with the same shape remains to be seen.

Cold start time is worth a specific note. Swift applications start quickly with almost no warm-up overhead — a real differentiator compared to JVM-based languages, which pay startup costs when a new instance spins up, even with mitigation strategies like AWS Lambda’s SnapStart. Apple’s Swift on Server documentation describes this as making Swift well-suited for serverless functions and cloud services that get rescheduled onto new machines frequently. Amazon announced integrated Swift support in Amazon Linux and the AWS Lambda Runtime at re:Invent 2025, which makes this a practical option rather than a theoretical one.

There’s also a dedicated server conference — ServerSide.swift — which, while small relative to something like KubeCon, signals that the community is serious and sustained, not hobbyist.


Android

The official Swift SDK for Android was announced October 24, 2025. This isn’t a community hack or a transpilation layer — it’s official tooling from the Swift Android Workgroup, producing native Swift binaries on Android via cross-compilation. The SDK includes swift-java, which generates bindings between Swift and Java in both directions, letting Swift code call into Android’s Java-based APIs without writing manual bridging code between the two runtimes.

The model it enables is “share logic, write the UI natively per platform.” Business rules, data models, networking layers in Swift; native Jetpack Compose on Android, SwiftUI on iOS. That’s a more honest cross-platform story than “write once, run everywhere” — the UI is still written twice, but in the right tool for each platform, and the parts that are actually shared are shared at the source level in one language. This is distinct from Skip’s approach of transpiling to Jetpack Compose, which is a different tradeoff entirely.

Over 25% of packages on the Swift Package Index had already completed Android adaptation by the time of the announcement. The SDK is in nightly preview — not production-ready — but the workgroup has active CI and a vision document in review. The trajectory is clear enough.


CLIs

This is underrated. Swift is genuinely good for writing CLI tools — probably better than most developers realize, because the entry point from Apple tooling has always been Xcode, and nobody associates Xcode with writing a quick command-line utility.

ArgumentParser (open source, Apple maintained) uses Swift’s property wrapper feature — annotations that let you attach behavior to a variable declaration — to derive argument parsing directly from how you declare your command’s structure. You define your command as a type with annotated properties, implement a single run() method, and ArgumentParser generates help text, error messages, shell completion scripts, and man pages from that definition automatically. The ergonomics are close to what you’d expect from a well-designed Rust CLI library like clap, without the complexity of Rust’s lifetime system.

import ArgumentParser

@main struct Greet: ParsableCommand {
    // Positional argument; the help string feeds the generated usage text.
    @Argument(help: "Name to greet") var name: String

    // Exposed as both -c and --count, defaulting to 1.
    @Option(name: .shortAndLong, help: "Number of times") var count: Int = 1

    mutating func run() {
        // 0..<count rather than 1...count, so --count 0 is a no-op
        // instead of a crash on the invalid range 1...0.
        for _ in 0..<count { print("Hello, \(name)!") }
    }
}

Swift Package Manager handles dependencies, cross-compilation to Linux is straightforward, and the resulting binary is self-contained. For macOS-first tooling — the kind you’d write to automate something in your local development environment — Swift beats the alternatives. It’s faster to write than Go for macOS-specific tasks (where you want Foundation APIs), more ergonomic than Rust, and produces a single native binary unlike Node.
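For completeness, the manifest for a tool like the one above is short. Version numbers here are illustrative, not pinned recommendations:

```swift
// swift-tools-version:5.9
// Package.swift for a single-binary CLI depending on ArgumentParser.
import PackageDescription

let package = Package(
    name: "greet",
    dependencies: [
        .package(url: "https://github.com/apple/swift-argument-parser", from: "1.3.0"),
    ],
    targets: [
        .executableTarget(
            name: "greet",
            dependencies: [
                .product(name: "ArgumentParser", package: "swift-argument-parser"),
            ]
        ),
    ]
)
```

One swift build produces the binary; there is no separate lockfile tool, task runner, or packaging step to configure.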

Apple’s own toolchain is substantially built in Swift: Swift Package Manager, SourceKit-LSP, the swift-format tool. That’s not marketing copy, it’s the observable state of what Apple ships for developer tooling.


Embedded

Swift’s new Embedded compilation mode produces stripped-down binaries suitable for firmware by disabling language features that require a runtime — things like dynamic type inspection, runtime generics resolution, and existential types (the mechanism that lets you treat different concrete types uniformly through a shared interface). What remains is still recognizably Swift — generics, closures, optionals, structured error handling — but compiled to run on hardware with kilobytes of memory rather than gigabytes. Announced at WWDC 2024, it targets ARM and RISC-V microcontrollers — Raspberry Pi Pico (RP2040), STM32 boards, Nordic nRF52840, ESP32-C6 among the officially supported hardware.

Apple itself uses Embedded Swift for a critical component: the Secure Enclave Processor, an isolated subsystem dedicated to keeping sensitive data secure. That’s not a demo project. It’s production firmware running inside every recent Apple device.

One demo makes the point vividly: Conway’s Game of Life running on the Playdate handheld compiles to just 788 bytes of code, combining high-level abstractions with low-level bit manipulation for real-time animation. The binary size story is legitimate — you get generics, closures, optionals, and error handling in kilobytes, with full C and C++ interoperability for existing SDK integration. For a developer who already knows Swift and wants to write firmware without learning C, that’s a meaningful on-ramp.

The space is still experimental — you need development snapshots of the toolchain, not stable releases — but the architectural commitment from Apple is clear. They’re not going to let the Secure Enclave run on an abandoned feature.


Reputation and destiny

Swift is in an awkward place that has nothing to do with the language itself. It’s too associated with Apple to be the obvious choice for backend or systems work, and too associated with mobile development to be taken seriously in circles where Go, Rust, and C# get evaluated on their merits. The language makes good decisions, the ergonomics are solid, the ecosystem outside iOS/macOS is genuinely growing — and none of it seems to matter much to how developers outside the Apple world categorize it.

There’s something almost structural about this. A language’s reputation calcifies early and resists correction. Go got “simple and fast for services.” Rust got “safe systems programming.” Swift got “iPhone apps.” The actual capabilities expand, the ecosystem matures, companies ship production firmware in it and run it in Lambda — and the mental model doesn’t update at the same pace. It’s not that people looked at Swift and rejected it. It’s that they never looked. The language is doing more than most people think it is.

It’s just not doing it loudly.
