Cloud Services and Internet of Things
1 Overview
Introduction and course organization
1.1 Objective
Connect Internet of Things (IoT) enabled devices using scalable cloud services in a project setup.
What you will build
Devices that communicate with cloud services
Devices
- Each device consists of a hardware and a software part
- Usually, real hardware executes software that is loaded onto the hardware
- To simplify development, you can use a script that simulates the hardware instead
Cloud Services
- We use real cloud services
- Currently, we have an education partnership with Google Cloud Platform (GCP)
- Unfortunately, GCP discontinued their IoT Core service :(
1.2 Project Ideas
You should come up with your own project idea, but here is some inspiration:
- Temperature/Humidity Monitoring System
- Air Pollution Monitoring System
- Noise Level Monitoring System
- Park Spot Monitoring System
- Health Monitoring System
- Traffic Management System
- ...
You don't have to solve a complex problem; you can also build something fun
1.3 Constraints
- Start from scratch, don't use presets or generators
- Keep project dependencies to a minimum
- Keep it small and simple
- Don't forget tooling and versioning
Keep in mind: the project idea itself does not matter so much; the focus is on learning the workflow and the technology
1.4 Prerequisites
- No formal prerequisites
- It's technical, you will have to code
- Proficiency in a language of your choice
You should have heard of
HTTP, TCP/IP, UDP, SSH, WebSockets, JSON, Git, Docker, API, Continuous Everything, Pull/Merge Request
1.5 Links
Description | Link |
---|---|
Gitlab Repositories (Team Projects, Source Code, Issues, ...) | https://gitlab.mi.hdm-stuttgart.de/csiot/ss24 |
Supplementary Code Repositories (Code examples, emulator, ...) | https://gitlab.mi.hdm-stuttgart.de/csiot/supplementary |
Hybrid?
The course takes place in presence.
If you want to participate remotely (e.g. due to a COVID infection, quarantine, or another legitimate reason), write us an email early enough. We'll make sure to bring the Meeting Owl and start the BBB stream. It's not a high-quality hybrid setup though, just sound and a shared screen.
1.6 Schedule
Date | Session (14:15 - 17:30) | Description |
---|---|---|
19.03.2024 | Kickoff | Course overview, questions and answers |
26.03.2024 | Lecture + Idea Pitch | Pitch your project idea (if you have any). Do you already have a team? |
02.04.2024 | Lecture + Team Setup | Maximum of 6 teams (4-6 members) and 30 students in total. Higher semesters take precedence over lower semesters |
09.04.2024 | Lecture | |
16.04.2024 | Lecture | |
30.04.2024 | Lecture | |
07.05.2024 | Working Session + Q&A | Questions regarding presentations at the beginning of the session |
14.05.2024 | Midterm presentations | |
28.05.2024 | Working Session | |
04.06.2024 | Working Session | |
11.06.2024 | Working Session | |
18.06.2024 | Working Session | |
25.06.2024 | Working Session + Q&A | Questions regarding presentations at the beginning of the session |
02.07.2024 | Final presentations | |
07.07.2024 (Sunday) | Project submission | Commits after the submission date will not be taken into account. Make sure to check the submission guidelines |
Lecture Sessions
- We'll talk through these slides in presentations
- Cloud Computing and Services
- Internet of Things
- Hands On / Assignments
- Call for teams and projects
Working Session
- Working sessions start with the team meetings
- Every team has a fixed 15 minutes slot
- Prepare the meeting (what have you done, what are you planning, specific questions about problems, ...)
- You can use the remaining lecture time to work on the applications
- The remaining time after the team meetings:
- We can discuss general questions together
- We can help with specific problems of your projects
Team time slots
Group | Time |
---|---|
XXX | 14:15 - 14:30 |
XXX | 14:30 - 14:45 |
XXX | 14:45 - 15:00 |
XXX | 15:00 - 15:15 |
XXX | 15:15 - 15:30 |
XXX | 15:30 - 15:45 |
Presentation Sessions
- Each member has to present equally
- Unexcused absence will make you fail the course
- Duration: 12 min per team
- Midterm presentations
- Introduce yourself (who are you, what are you studying, what's your background)
- Each team presents its current state
- Check the grading slide for general presentation rules
- Content: project idea, technology-stack, (maybe schedule, architecture, docker & code)
- Final presentations
- Each team presents its final state
- Check the grading slide for important things to present
- Open and present live in browser
- Explain architecture, devices, data flow, lessons learned
1.7 Project Submission
Grading is based on the Gitlab repo in the csiot group https://gitlab.mi.hdm-stuttgart.de/groups/csiot
Add a README.md in the repo that contains:
- project name
- members (full name, student short, matriculation number)
- project abstract
- technical documentation
1.8 Grading
The total of 50 Points is split into 4 categories.
Following general best practices is required for each category.
Although the grades are derived from the team's work, each individual gets a distinct grade that can differ from that of the other team members.
Code & Architecture (20 Points)
- Naming is consistent and fits best practices of language
- Code comments make sense and help to understand the flow
- File/Folder/Package structure is clean and makes sense
- Most work is done using cloud services, e.g. AWS IoT Events, DynamoDB, ...
- At least 2 sensors are used
- At least 2 actuators are used
- Application contains events that depend on data from multiple devices
- Application is horizontally scalable (minimum configuration to add or remove devices/things)
Tooling (15 Points)
- Deployed code, e.g. lambdas, is tested
- Infrastructure setup is automated, e.g. with Terraform
- Infrastructure can easily be set up by lecturers
- If applicable:
- Devices and scripts are encapsulated in a docker container
- Devices can easily be started, e.g. with docker-compose stack
- Application has no local dependencies but Docker (everything is dockerized)
Presentation (10 Points)
- Each team member has equal presentation time and content
- Each team member knows about all areas of the project
- Group highlights lessons learned
- Presentation is finished in the allotted time
- Presentation is well prepared and works
Technical documentation (5 Points)
- Contains a very short project abstract
- Contains setup instructions
- Contains an architecture diagram
- Contains a data flow diagram
1.9 Ask for help!
In general:
The Center for Learning and Development, the central study guidance, and the VS (student government) support you with:
- Exam nerves, fear of failure, financial problems, stress, depression, ...
- Bullying, racism, sexism, discrimination, ...
- Tips and feedback regarding scientific writing (e.g. bachelor thesis)
- Career options after the bachelor
- Support for decision-making
Regarding this course:
- Don't be afraid to ask questions about your project (that won't affect your grading negatively)
- Talk to us early if there are any problems within the group (someone never shows up or does not support the group)
1.10 Questions
- Do you know what you will build?
- Do you know how it is graded?
- Do you know what presentation, lecture, working, Q&A sessions etc. are?
- Anything else?
2 Cloud Computing and Services
Introduction
2.1 What is Cloud?
What do you think?
2.2 Definition
Cloud computing is the on-demand availability of computer system resources, without direct active management by the user.
2.3 Timeline
- 1960s-90s: Initial concepts by Compaq, AT&T, IBM, DEC, ...
- 2002: Amazon creates Amazon Web Services (AWS); the Elastic Compute Cloud (EC2) follows in 2006
- 2008: Google creates App Engine
- 2008: First public release of OpenNebula (EU-funded research project)
- 2010: Microsoft creates Azure Cloud
- 2010: NASA and Rackspace create OpenStack (based on NASA's Nebula platform and Rackspace Cloud Files)
- 2012: Google creates Compute Engine
- 2015: Cloud Native Computing Foundation (CNCF) is founded; see the CNCF Landscape
2.4 Service Models
Service | Description | Examples |
---|---|---|
Infrastructure as a service (IaaS) | High-level API for physical computing resources | Virtual machines, block storage, object storage, load balancers, networks, ... |
Platform as a service (PaaS) | High-level application hosting with configuration | Databases, web servers, execution runtimes, development tools, ... |
Software as a service (SaaS) | Hosted applications without configuration options | Email, file storage, games, project management, ... |
Function as a service (FaaS) | High-level function hosting and execution | Image resizing, event based programming, ... |
What is Google Photos? iCloud? GitHub? Dropbox? GMail? EC2? Dynamo DB? Google Firebase? Lambda? Google App Engine? Hosted Kubernetes?
3 Cloud Applications
Introduction
3.1 What is a Cloud Application?
What do you think?
3.2 Definition
Software with an architecture that leverages a multitude of cloud services.
3.3 VideoApp
An example app and web platform that allows friends from all over the world to collaboratively create a movie from their holidays
3.4 VideoApp Features
- Users can sign up to the platform using an email address or a third-party provider
- Users can create holiday groups and invite friends
- Friends can upload raw footage into holiday groups and tag it
- Friends can edit the footage into a movie using an online editor
3.5 VideoApp Requirements
Feature | Technical elements |
---|---|
Users can sign up to the platform using an email address or a third-party provider | Email, OAuth2 provider, relational data storage, ... |
Users can create holiday groups and invite friends | Relational data storage, caching, notification, ... |
Friends can upload raw footage into holiday groups and tag it | Relational data storage, object storage, transcoding, queueing, search index, caching, notification, ... |
Friends can edit the footage into a movie using an online editor | Object storage, transcoding, queueing, caching, notification, ... |
For development and operations: | System monitoring and alerting, distributed logging, automated integration and deployment, global content distribution network, virtual network, system environments (development, staging, production, ...) |
3.6 VideoApp Architecture
Feature: Friends can upload raw footage into holiday groups and tag it
4 Cloud Infrastructure
Introduction
4.1 Technical View
Conventional Infrastructure | Cloud Infrastructure |
---|---|
(Bare-metal) Servers, Type 1/2 Hypervisors, Containers | Cloud Resources |
Long-living assets | Short resource life span |
Own data center, Colocation, Rented dedicated servers | No own hardware |
Direct physical access | No physical access to the hardware |
4.2 Organizational View
4.3 Challenges
Short-living resources
Deployment, configuration, maintenance and teardown have to be automated
DevOps
Developers need to understand the runtime environment
Operators need to understand some application layers
New components in the application stack
Service discovery, service configuration, authentication/authorization and monitoring
4.4 Advantages
Continuous everything
Integration, deployment, delivery
High availability
Scalability, reliability, geo replication, disaster recovery
5 Cloud Tooling
Introduction
5.1 Infrastructure as Code
A coded representation for infrastructure preparation, allocation and configuration
5.2 Resource Preparation
Example: Custom OS image with pre-installed Docker on Hetzner Cloud using Packer
{
  "builders": [
    {
      "type": "hcloud",
      "token": "xxx",
      "image": "debian-10",
      "location": "nbg1",
      "server_type": "cx11",
      "ssh_username": "root"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "apt-get update",
        "apt-get upgrade -y",
        "curl -fsSL https://get.docker.com | sh"
      ]
    }
  ]
}
5.3 Resource Allocation
Example: Creating an S3 bucket on AWS using Terraform
provider "aws" {
access_key = "xxx"
secret_key = "xxx"
region = "eu-central-1"
}
resource "aws_s3_bucket" "terraform-example" {
bucket = "aws-s3-terraform-example"
acl = "private"
}
5.4 Resource Configuration
Example: Installing Docker and initializing a Swarm using an Ansible Playbook
- hosts: swarm-master
  roles:
    - geerlingguy.docker
  tasks:
    - pip:
        name: docker
    - docker_swarm:
        state: present
5.5 Other famous tools
cloud-init: Example user data that authorizes SSH keys
#cloud-config
ssh_authorized_keys:
- ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAGEA3FSyQwBI6Z+nCSjUUk8EEAnnkhXlukKoUPND/RRClWz2s5TCzIkd3Ou5+Cyz71X0XmazM3l5WgeErvtIwQMyT1KjNoMhoJMrJnWqQPOt5Q8zWd9qG7PBl9+eiH5qV7NZ mykey@host
- ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA3I7VUf2l5gSn5uavROsc5HRDpZdQueUq5ozemNSj8T7enqKHOEaFoU2VoPgGEWC9RyzSQVeyD6s7APMcE82EtmW4skVEgEGSbDc1pvxzxtchBj78hJP6Cf5TCMFSXw+Fz5rF1dR23QDbN1mkHs7adr8GW4kSWqU7Q7NDwfIrJJtO7Hi42GyXtvEONHbiRPOe8stqUly7MvUoN+5kfjBM8Qqpfl2+FNhTYWpMfYdPUnE7u536WqzFmsaqJctz3gBxH9Ex7dFtrxR4qiqEr9Qtlu3xGn7Bw07/+i1D+ey3ONkZLN+LQ714cgj8fRS4Hj29SCmXp5Kt5/82cD/VN3NtHw== smoser@brickies
Pulumi: Example creating a Google Cloud Storage bucket (Python)
import pulumi
from pulumi_google_native.storage import v1 as storage
config = pulumi.Config()
project = config.require('project')
# Create a Google Cloud resource (Storage Bucket)
bucket_name = "pulumi-goog-native-bucket-py-01"
bucket = storage.Bucket('my-bucket', name=bucket_name, bucket=bucket_name, project=project)
# Export the bucket self-link
pulumi.export('bucket', bucket.self_link)
5.6 Continuous Everything
Continuous | Requires | Offers | Implementation |
---|---|---|---|
Integration | Devs need correct mindset, established workflows | Avoids divergence, ensures integrity/runnability | Shared codebase, integration testing |
Deployment | Automated deployment, access control / permission management | Ensures deployability | Infrastructure as Code; Docker Swarm, Kubernetes, Nomad, ... |
Delivery | Deployability, approval from marketing, sales, customer care | Rapid feature release cycles, small to no difference between environments | Same as for continuous deployment; release/feature management |
5.7 Monitoring and Alerting
Proper Monitoring/Alerting is essential when CD is applied
AWS CloudWatch
Service for time series data, logs and dashboards
Prometheus & Grafana
Prometheus: Time series database, metric exporters, Alertmanager
Grafana: (Real-time) Dashboards for monitoring data, Alerting Engine
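To make the metric-exporter idea concrete, here is a minimal sketch of a custom exporter using the prometheus_client Python package; the metric name and the update logic are made up for illustration:
import random
import time
from prometheus_client import Gauge, start_http_server  # assumes the prometheus_client package is installed
# Hypothetical metric; in a real application the value would come from your sensors or services
temperature = Gauge("station_temperature_celsius", "Current station temperature")
start_http_server(8000)  # exposes http://localhost:8000/metrics for Prometheus to scrape
while True:
    temperature.set(random.uniform(18.0, 25.0))
    time.sleep(5)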
5.8 Backup and Restore
Backup
- Automatic backup of stateful components
- Backup location preferably on an external system (e.g. AWS S3); see the sketch after this section
Restore
- Restore process needs to be defined and tested
- Important for disaster recovery, useful for migration tasks
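As a sketch of the backup idea (not a prescribed solution), uploading a database dump to S3 with boto3 might look like this; the bucket name and file path are placeholders, and AWS credentials are assumed to be configured in the environment:
import datetime
import boto3  # assumes AWS credentials are available (environment, profile, or instance role)
# Placeholder names: adjust the bucket and source file to your setup
BUCKET = "my-backup-bucket"
DUMP_FILE = "/var/backups/app-db.sql"
s3 = boto3.client("s3")
key = f"backups/{datetime.date.today().isoformat()}/app-db.sql"
s3.upload_file(DUMP_FILE, BUCKET, key)  # upload the dump under a date-prefixed key
print(f"uploaded {DUMP_FILE} to s3://{BUCKET}/{key}")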
6 Internet of Things
Introduction
6.1 What is the Internet of Things?
What do you think?
6.2 Definition
A system of interrelated computing devices that can transfer data over a network without human interaction
6.3 Architecture
6.4 Hardware
Device | Type | CPU (Max) | RAM | OS | TCP/IP | GPIO |
---|---|---|---|---|---|---|
DHT22 | Sensor | - | - | - | ❌ | ❌ |
Arduino (ATmega328P) | MCU | 20 MHz 8-bit RISC | 2 KiB SRAM | - | ❌ | ✅ |
ESP32 (Xtensa LX6) | SoC | 2 * 240 MHz 32-bit RISC | 520 KiB SRAM | e.g. FreeRTOS | ✅ | ✅ |
Raspberry Pi 4 (ARM Cortex-A72) | SoC | 4 * 1.5 GHz 64-bit ARM | 4 GiB DDR4 | GNU/Linux | ✅ | ✅ |
Random Gaming-PC | PC | 8 (HT) * 5.0 GHz 64-bit x86 | 32 GiB DDR4 | e.g. GNU/Linux | ✅ | ❌ |
6.5 Protocols
The following protocols are often used in an Internet of Things stack
Name | Network Layer | Description |
---|---|---|
LoRa(WAN) | Layer 1/2 | Low power, long range, uses license-free radio frequencies |
ZigBee | Layer 1/2 | Low power, 2.4 GHz, 64-bit device identifier |
6LoWPAN | Layer 1/2 | Low power, 2.4 GHz / license-free radio frequencies, IPv6 addressing |
Ethernet | Layer 1/2 | Frame based protocol, also used for the normal internet |
802.11 Wi-Fi | Layer 1/2 | Wireless local area network protocol, also used for the normal internet |
IPv4 and IPv6 | Layer 3 | Packet based protocol, also used for the normal internet |
Bluetooth LE | Layer 3 | Low energy, wireless personal area network protocol, different from classic Bluetooth |
MQTT | Layer 7 | Lightweight, Message Queuing Telemetry Transport protocol, publish-subscribe model |
7 Hands On
Live demos and practical exercises to learn some new technologies
7.1 Backstory of the assignments
In the year 3000, space travel had become commonplace, and as humanity spread out across the galaxy, the demand for necessary goods on distant planets skyrocketed. The Evil Mining Corporation saw an opportunity and quickly established itself as the primary supplier of four crucial items: hydrogen, oxygen, WD40, and duct tape.
At first, the corporation's methods were praised for their efficiency. They implemented a sophisticated Internet of Things architecture that relied on MQTT to coordinate information across their network of mining sites and supply ships using the interplanetary GalaxyNet. The system allowed them to accurately track their inventory and ensure that the four essential items were always in stock and ready for shipment.
But as time passed, the corporation's true motives became clear. They were more interested in profits than the well-being of the planets they supplied. They began cutting corners and taking shortcuts that put entire populations at risk.
One day, a shipment of hydrogen destined for a small farming colony never arrived. The colony was left without power, and their crops began to wither and die. When they contacted the Evil Mining Corporation, they were met with silence. It was only then that they realized just how much power the corporation had over their lives.
Desperate, the farmers banded together and set out to uncover the truth behind the corporation's operations. What they found was shocking. The corporation had been exploiting the resources of every planet they supplied, leaving behind nothing but ecological devastation and poverty.
Enraged, the farmers decided to take matters into their own hands. They gathered all the WD40 and duct tape they could find and launched a coordinated attack on the corporation's headquarters. The battle was long and hard-fought, but in the end, the farmers emerged victorious.
With the Evil Mining Corporation defeated, the planets were finally able to rebuild and thrive. They established new supply chains that were transparent and fair, ensuring that the four essential items were always available to those who needed them. And they vowed to never again let greed and corruption take hold of their society.
7.2 MQTT
Lightweight, publish-subscribe network protocol that transports messages between devices
Open OASIS and ISO standard (ISO/IEC 20922)
- A client sends a message to a topic, e.g. /sensors/temperature/garage
- A client can subscribe to multiple topics, e.g. /sensors/temperature/+
Mosquitto is a popular lightweight server (broker)
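To illustrate the publish-subscribe model, here is a minimal sketch using the Paho Python client (the same library used in the assignment below); the broker address localhost:1883 and the topic names are placeholders:
import paho.mqtt.client as mqtt
# Called for every message arriving on a subscribed topic
def on_message(client, userdata, message):
    print(f"{message.topic}: {message.payload.decode()}")
client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.on_message = on_message
client.connect("localhost", 1883)  # placeholder broker
# '+' matches exactly one topic level, '#' matches all remaining levels
client.subscribe("sensors/temperature/+")
# Publish a reading to a concrete topic
client.publish("sensors/temperature/garage", "21.5")
client.loop_forever()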
Assignment 1: MQTT
Create a client that publishes data to a channel and a client that receives data from that channel. Both communicate via MQTT.
Broker
- IP: 88.198.150.10
- PORT: 8883
Code snippets
The MQTT server is TLS-secured. You'll find the certificates in the assignments repository. Here is an example of how to use them with Paho and Python:
import ssl
import paho.mqtt.client as mqtt
from random import randint
import json
# TLS
client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.tls_set(ca_certs="certificates/mosquitto.crt", certfile="certificates/auth.mosquitto.crt", keyfile="certificates/auth.mosquitto.key", cert_reqs=ssl.CERT_REQUIRED, tls_version=ssl.PROTOCOL_TLS, ciphers=None)
client.tls_insecure_set(True)
# Random
value = randint(0, 1000)
# JSON
data = { "key": value }
payload = json.dumps(data)
Tasks
- a) Sender
  - Send a hello-world message containing your station name to the subchannel csiot/lobby every 10 seconds (a minimal sketch follows after this list)
- b) Receiver
  - Subscribe to all subchannels of csiot with csiot/+
  - Print all messages to the console
- c) Sender
  - Send random data (hydrogen, oxygen, wd40, gaffer) every 10 seconds as a stringified JSON
  - Use a subchannel of csiot/ containing your station name, e.g. csiot/mos-eisley, for this data
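A minimal sender sketch for tasks a) and c), assuming the certificate paths from the snippet above; the station name and the message format are placeholders, not a definitive solution:
import json
import ssl
import time
from random import randint
import paho.mqtt.client as mqtt
STATION = "mos-eisley"  # placeholder station name
client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.tls_set(ca_certs="certificates/mosquitto.crt", certfile="certificates/auth.mosquitto.crt", keyfile="certificates/auth.mosquitto.key", cert_reqs=ssl.CERT_REQUIRED, tls_version=ssl.PROTOCOL_TLS)
client.tls_insecure_set(True)
client.connect("88.198.150.10", 8883)
client.loop_start()  # handle the network loop in a background thread
while True:
    # Task a): hello-world message to the shared lobby channel
    client.publish("csiot/lobby", f"hello world from {STATION}")
    # Task c): random supply data as stringified JSON to the station channel
    supplies = {key: randint(0, 1000) for key in ("hydrogen", "oxygen", "wd40", "gaffer")}
    client.publish(f"csiot/{STATION}", json.dumps(supplies))
    time.sleep(10)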
7.3 Docker
Docker CLI
List running containers
docker ps
Run a new container with alpine image
docker run -ti alpine sh
docker run -ti alpine:3.13 sh
Run a new container with nginx image
docker run -p 3333:80 nginx
Map a folder as volume
docker run -p 3333:80 -v $PWD/website:/usr/share/nginx/html nginx
Run a new container with node app
docker run -p 3333:3000 -v $PWD/app:/app node:alpine node /app/server.js
Dockerfile
Build custom image with node app
Dockerfile
FROM node:15-alpine
WORKDIR /app
COPY server.js .
CMD node server.js
Build using Dockerfile and run
docker build -t csiot-app app
docker run -p 3333:3000 csiot-app
Build docker compose stack with 3 services
Docker compose
docker-compose.yml
services:
  api:
    build: ./app
    ports:
      - "3333:3000"
  proxy:
    image: nginx:1-alpine
  redis:
    image: redis:6-alpine
Run full compose stack
docker compose up
Run single container from compose stack
docker compose run api sh
Assignment 2: Docker
Use docker-compose to start three station clients and one receiver. Copy your existing solution from Assignment 1
and extend it.
Code snippets
import os
my_string = os.environ["STRING_VARIABLE"]
my_integer = int(os.environ["INTEGER_VARIABLE"])
Tasks
- a) Docker
  - Separate client and sender into two directories with a custom Dockerfile each
  - Sender & Client: Use IP and PORT from the environment variables (see the sketch after this list)
  - Sender: Get the station name from the environment variable
- b) Compose
  - Use volumes to map the certificates into the containers
  - Use depends_on to start the headquarters (receiver) first
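For task a), a minimal sketch of reading the connection settings from the environment; the variable names MQTT_HOST, MQTT_PORT and STATION_NAME are placeholders, set whatever names fit your compose file:
import os
import paho.mqtt.client as mqtt
# Placeholder variable names; define them for each service in docker-compose.yml
host = os.environ["MQTT_HOST"]
port = int(os.environ.get("MQTT_PORT", "8883"))
station = os.environ.get("STATION_NAME", "unnamed-station")
client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
# ... tls_set(...) as in Assignment 1, with the certificates mapped into the container via volumes ...
client.connect(host, port)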
7.4 Emulator
We use an emulator to simulate hardware devices.
Check out https://gitlab.mi.hdm-stuttgart.de/csiot/supplementary/emulator.
The README explains how to use the emulator.
Assignment 3: Emulator
Use the emulator to simulate the state of a station's supplies. Modify the sender to fetch the emulator state and send it to the broker.
Code snippets
A simple example of how to fetch data from a REST API:
import json
import urllib.request
def get(url):
    try:
        req = urllib.request.Request(url)
        with urllib.request.urlopen(req) as f:
            if f.status == 200:
                return json.loads(f.read().decode('utf-8'))
    except ConnectionRefusedError:
        print('could not connect')
    except Exception:
        print('unknown error')
    return None
result = get('http://localhost:3333/path/to/route')
print(result)
Tasks
Use your previous assignment as a basis.
- a) Emulator
  - Create a config.yaml to display 4 sensors for the station supplies
- b) Docker-compose
  - Use only one station (remove the other services)
  - Add an emulator service:
    - Map config.yaml through volumes
    - Expose a port
- c) Test your setup
  - Run compose, open the emulator in a browser, test the REST API (Postman, VSCode RestClient, curl, ...)
- d) Sender
  - Refactor the sender code to fetch data from the emulator's API (see the sketch after this list)
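For task d), a sketch of the refactored sender loop; the emulator service name, port and route in EMULATOR_URL are assumptions, check the emulator README for the actual API:
import json
import ssl
import time
import urllib.request
import paho.mqtt.client as mqtt
EMULATOR_URL = "http://emulator:3333/sensors"  # placeholder: compose service name, port and route
STATION = "mos-eisley"                         # placeholder station name
def get(url):
    # Same helper as in the snippet above
    try:
        with urllib.request.urlopen(urllib.request.Request(url)) as f:
            if f.status == 200:
                return json.loads(f.read().decode("utf-8"))
    except ConnectionRefusedError:
        print("could not connect")
    return None
client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.tls_set(ca_certs="certificates/mosquitto.crt", certfile="certificates/auth.mosquitto.crt", keyfile="certificates/auth.mosquitto.key", cert_reqs=ssl.CERT_REQUIRED, tls_version=ssl.PROTOCOL_TLS)
client.tls_insecure_set(True)
client.connect("88.198.150.10", 8883)
client.loop_start()
while True:
    state = get(EMULATOR_URL)  # fetch the current supply levels from the emulator
    if state is not None:
        client.publish(f"csiot/{STATION}", json.dumps(state))
    time.sleep(10)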
7.5 Assignment 4: Terraform TLS-CA
TLS-based client authentication with your own CA
Build and maintain your own custom certificate authority (CA) for client authentication using Terraform.
Requirements:
- Terraform: https://www.terraform.io/downloads
Tasks
- a) Discuss: What is the role of a CA? Why do we need it in the context of IoT device authentication?
- b) Generate/Apply CA/CSR's/Certificates using the provided solution (adjust for your needs): https://gitlab.mi.hdm-stuttgart.de/csiot/supplementary/handson/-/tree/master/solutions/04-terraform-tls-ca
- c) Use openssl to inspect the generated certificates, e.g. openssl x509 -in a-cert.pem -noout -text
  - Who signed this certificate? For what can this certificate be used?
- d) Use GitLab to store Terraform's state to work as a team on the same CA.