IoT testing pyramid: unit tests (fast, many) → integration → hardware-in-the-loop → field validation (slow, few)
IoT testing is harder than pure-software testing because you cannot mock physics. Radio interference, battery drain, mechanical stress, and temperature cycling cause failures that unit tests never catch.
void test_service_call_queues_on_button_press(void) {
    /* Arrange: a button-press event. */
    event_t evt = { .type = BUTTON_PRESS };

    /* Act: run it through the state machine. */
    state_machine_process(&evt);

    /* Assert: state transitioned and a BLE notification was queued. */
    TEST_ASSERT_EQUAL(STATE_SERVICE_CALL_PENDING, get_current_state());
    TEST_ASSERT_TRUE(ble_queue_has_pending_event());
}
HIL connects a real device to a PC-controlled test harness that simulates hardware inputs and validates outputs, catching timing bugs and interrupt-handling errors that unit tests miss.
Wireless testing is one of the most frequently skipped — and most frequently regretted — aspects of IoT hardware testing. A device that works perfectly in the lab next to the test equipment may fail in the field due to antenna detuning from nearby metal, adjacent-channel interference from co-located devices, or simply inadequate transmit power for the required range.
RF characterisation tests should include: transmit power measurement at all modulation rates, receiver sensitivity measurement (minimum signal level for reliable reception), adjacent channel rejection (how does the radio perform when another device transmits 5 MHz away?), and co-existence testing (BLE and WiFi simultaneously, if both are present). These tests require a conducted RF measurement setup or a shielded anechoic environment — improvised free-space measurements are not reproducible enough for production qualification.
Accelerated life testing compresses months or years of operational stress into days or weeks by increasing stress levels beyond normal operating conditions. Temperature cycling (thermal fatigue), humidity (corrosion of PCB traces and connector contacts), vibration (solder joint fatigue), and UV exposure (enclosure material degradation) are common ALT stressors.
ALT results predict field failure rates using statistical models (Arrhenius for thermal, Coffin-Manson for thermal cycling). Knowing your device’s MTTF (Mean Time To Failure) before launch lets you design appropriate warranty terms, provision spare parts inventory, and price your product to cover expected warranty costs. Products launched without ALT data routinely face warranty cost surprises that erode margins in years 2–3.
Every firmware change should trigger automated tests before merging. For IoT firmware, this means at minimum: unit tests on host (mock hardware), integration tests on a hardware-in-loop rig (real MCU, simulated peripherals), and a smoke test on actual hardware in a lab environment. Pull requests that break any of these gates do not merge, regardless of urgency.
Building this CI/CD pipeline requires investment — hardware test rigs, automated provisioning of test devices, and robust test infrastructure. The investment pays for itself within months on any product with regular firmware updates. Manual regression testing at release frequency is not scalable and introduces human error at the worst possible moment.
FSS is a full-stack IoT engineering team — hardware, firmware, cloud, and mobile in one place.
FSS Technology designs and builds IoT products from silicon to cloud — embedded firmware, custom hardware, and Azure backends.
Talk to our team →