Express Generator and socket.io

Recently we went about adding socket.io to a site scaffolded with the Express application generator. Socket.io’s documentation is pretty good, but it doesn’t “just work” with the Express generator setup.

While in the end it’s a trivial fix, a quick google didn’t turn up much in the way of answers, so I figured I’d put together a simple guide to Express and Socket.io using the command line Express generator.

Express has been my go-to Node framework for some time; it’s replaced Sinatra as my un-opinionated web framework of choice. Express is extremely un-opinionated out of the box, so in my default setup I add some bells and whistles such as automagic controller setup, a standard app dir for controllers, routes, models, and views, and a few more things to tweak it to my workflow.

I’m not gonna talk about those today though. For the sake of keeping it clean, and leaving you with a boilerplate you can fit into your own world, I’m gonna stick as close to out of the box as I can. So without further ado, let’s make some stuff.

Prerequisites

You’re gonna need nodejs installed. If you haven’t already done that, head on over to the nodejs website and install it for your system.

This tutorial assumes basic knowledge of node and the command line, but I’m gonna try and be as verbose as possible so non-techie types can give it a go.

Also, I have us test the example using curl. It just feels cooler, more like someone else is triggering the socket. If you don’t have curl installed you can just visit the url in a separate tab in the browser.

Generating your express project

Cool, now that that’s out of the way we’re gonna set up the basics.

$ sudo npm install express-generator -g

O.K., I’m assuming you’re familiar with npm at this point, but just in case you aren’t: npm is Node’s default package manager. The command above does a couple of things. It uses npm to install express-generator, and it’s using sudo since the -g option tells it to install “globally”.

Now let’s use express generator to set up our app.

$ express myApp

This sets up a barebones express application in the folder myApp. Let’s go there now.

$ cd myApp

Now let’s add socket.io to our app. Note the --save flag to save it to our package.json file as a dependency.

$ npm install socket.io --save

Now that we’ve added socket.io to our package.json let’s install all the default dependencies too.

$ npm install

And for kicks let’s launch our application.

$ node bin/www

If all went well you should have an express app running at http://localhost:3000; check that out in your browser.

Let’s kill the server so we can set up socket.io (if you don’t know how to do this, try hitting Ctrl+c at the command prompt).

Adding the websocket server

Now for the fun part. Open app.js in your editor of choice. Lines 1-11 should look like this:

var express = require('express');
var path = require('path');
var favicon = require('serve-favicon');
var logger = require('morgan');
var cookieParser = require('cookie-parser');
var bodyParser = require('body-parser');

var routes = require('./routes/index');
var users = require('./routes/users');

var app = express();

Pretty straightforward stuff here. Require all the necessary modules, pull in the index and users routes, and create the Express app.

Let’s go ahead and add socket.io to the app.

We’re going to add the following lines below our app variable:

var server = require('http').Server(app);
var io = require('socket.io')(server);

The first line creates our app’s HTTP server. This is normally done in bin/www but we’re going to move it here. The second sets up our websocket server to run in the same app. When you’re done, lines 1-13 should look like so.

var express = require('express');
var path = require('path');
var favicon = require('serve-favicon');
var logger = require('morgan');
var cookieParser = require('cookie-parser');
var bodyParser = require('body-parser');

var routes = require('./routes/index');
var users = require('./routes/users');

var app = express();
var server = require('http').Server(app);
var io = require('socket.io')(server);

Cool, now we have to pass our server into bin/www so let’s move on down to the bottom of the file. It should look like this:

app.use(function(err, req, res, next) {
  res.status(err.status || 500);
  res.render('error', {
    message: err.message,
    error: {}
  });
});


module.exports = app;

See that last line where we’re exporting the app? Well, we want to export our server there as well, since we’re declaring it on line #12 of app.js instead of in bin/www. That will make your last line look like this:

module.exports = {app: app, server: server};

Next, open bin/www in your editor.

Lines 1-22 should look like this:

#!/usr/bin/env node

/**
 * Module dependencies.
 */

var app = require('../app');
var debug = require('debug')('myApp:server');
var http = require('http');

/**
 * Get port from environment and store in Express.
 */

var port = normalizePort(process.env.PORT || '3000');
app.set('port', port);

/**
 * Create HTTP server.
 */

var server = http.createServer(app);

Notice two things are wrong here. First, we’re no longer just returning our express app but an object containing app and server. So let’s change line #7 from:

var app = require('../app');

To:

var app = require('../app').app;

Second, we’re declaring a second server on line 22:

var server = http.createServer(app);

Let’s change that to require the instance we created in app.js that now contains our socket.io server as well.

var server = require('../app').server;

That should do it. Lines 1-22 should now look like this:

#!/usr/bin/env node

/**
 * Module dependencies.
 */

var app = require('../app').app;
var debug = require('debug')('myApp:server');
var http = require('http');

/**
 * Get port from environment and store in Express.
 */

var port = normalizePort(process.env.PORT || '3000');
app.set('port', port);

/**
 * Create HTTP server.
 */

var server = require('../app').server;

Cool, let’s fire up our server and make sure everything still works. If it does, you should see the same express message you saw at http://localhost:3000 previously.

Now that we have our pieces in place, let’s wire up a simple socket.

We’ll start by passing our socket to our response in middleware. Open app.js back up, start a new line on line 21, and add the following. This simply attaches our socket.io instance to res on every request.

app.use(function(req, res, next){
  res.io = io;
  next();
});

Still nothing to see though. Let’s add something fun. Save that and open up routes/users.js. It should look like this:

var express = require('express');
var router = express.Router();

/* GET users listing. */
router.get('/', function(req, res, next) {
  res.send('respond with a resource.');
});

module.exports = router;

Remember how we added socket.io to our response object a minute ago? Well, now we can use it to respond to routed requests via a websocket. Keen.

var express = require('express');
var router = express.Router();

/* GET users listing. */
router.get('/', function(req, res, next) {
  res.io.emit("socketToMe", "users");
  res.send('respond with a resource.');
});

module.exports = router;

Now let’s add a socket listener to our index view (views/index.jade). Here’s what it should look like now:

extends layout

block content
  h1= title
  p Welcome to #{title}

Let’s add the following JS:

extends layout

block content
  h1= title
  p Welcome to #{title}
  script(src="/socket.io/socket.io.js")
  script.
    var socket = io('//localhost:3000');
    socket.on('socketToMe', function (data) {
      console.log(data);
    });

Great, I think that’s it. Let’s restart the server and see if this works.

First open up your homepage at http://localhost:3000. Open up dev tools and watch your JS console.

Now open a terminal and type:

$ curl http://localhost:3000/users

If all went well you should see your dev tools console output “users”.

That’s it, you’re ready to start building cool socket-enabled stuff. Pretty easy right?
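As a next step, you could also push messages from the server as clients connect, by registering a connection handler anywhere after io is created in app.js. Here’s a minimal sketch (the event name and message are just examples, not part of the generator setup):

io.on('connection', function (socket) {
  // runs once for each client that connects
  console.log('a client connected');
  // greet the new client over the same event our view listens on
  socket.emit('socketToMe', 'welcome');
});

Any page that has loaded the client script above will log “welcome” in its console when it connects.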

MongoDB connection with Node.js and Express

Incorrect implementation

Let’s first take a look at an example of an incorrect implementation.

const express = require('express');
const app = express();
const MongoClient = require('mongodb').MongoClient;
const ObjectId = require('mongodb').ObjectId;

const port = 3000;

const mongo_uri = 'mongodb://localhost:32768';

app.get('/', (req, res) => {
  MongoClient.connect(mongo_uri, { useNewUrlParser: true })
    .then(client => {
      const db = client.db('my-db');
      const collection = db.collection('my-collection');
      collection.find({}).toArray().then(response => res.status(200).json(response)).catch(error => console.error(error));
    });
});

app.get('/:id', (req, res) => {
  const id = new ObjectId(req.params.id);
  MongoClient.connect(mongo_uri, { useNewUrlParser: true })
    .then(client => {
      const db = client.db('my-db');
      const collection = db.collection('my-collection');
      collection.findOne({ _id: id }).then(response => res.status(200).json(response)).catch(error => console.error(error));
    });
});

app.listen(port, () => console.info(`REST API running on port ${port}`));

In the code above, we create a bunch of initial variables and define two API endpoints. Each endpoint connects to the database and executes a simple query.

The issue with this approach is that each time we call an endpoint, a new connection is created, and that’s not the best thing to do as it will not scale very well.

We can verify that a new connection is made by starting up the API, making a request against both of the endpoints, and taking a look at the number of active connections in MongoDB (via its CLI):

> db.serverStatus().connections;
{ "current" : 3, "available" : 838857, "totalCreated" : 37 }

We have 3 connections - 2 for the API and 1 for the CLI itself that connects to the database. This is not the right approach since, ideally, we’d like to share one connection throughout the entire application.

Sharing the connection

There are a bunch of approaches that we can follow, and we’ll discuss one that seems really interesting. We’ll base our application on the premise that the API should not be available if the database that powers it is not available. This makes sense - there’s no point in providing any endpoints if the database is down and we can’t effectively serve data.

To achieve this, we need to re-think our logic around connecting to the database a little bit - first, we should attempt to make the connection, and if that is successful, we can fire up the API server as well.

MongoClient.connect(mongo_uri, { useNewUrlParser: true })
  .then(client => {
    const db = client.db('my-db');
    const collection = db.collection('my-collection');
    app.listen(port, () => console.info(`REST API running on port ${port}`));
  }).catch(error => console.error(error));

So far so good, but now the question is: how do we make the connection available? The thing is, we don’t need to. It’s enough if we provide the routes with the collection information.

Express has a great feature for sharing data between routes (especially useful with modular code). This is a simple object, app.locals; we can attach properties to it and access them later on inside the route definitions:

// add this line before app.listen()
app.locals.collection = collection;

Once we have the collection information we can rework our routes as well:

app.get('/', (req, res) => {
  const collection = req.app.locals.collection;
  collection.find({}).toArray().then(response => res.status(200).json(response)).catch(error => console.error(error));
});

app.get('/:id', (req, res) => {
  const collection = req.app.locals.collection;
  const id = new ObjectId(req.params.id);
  collection.findOne({ _id: id }).then(response => res.status(200).json(response)).catch(error => console.error(error));
});

If we now check the database connection count we’ll see only 2 active connections:

> db.serverStatus().connections;
{ "current" : 2, "available" : 838858, "totalCreated" : 39 }

One for our API and one for the CLI. This connection count will never increase since we connect to the database once when we start up the API itself, and that’s it.

Connection Pooling

Now, of course, there are situations where the API not only returns information from a database but some other piece of data as well - in this case, we may not want every endpoint to depend on whether the database is up or down; only the endpoints where a database query is made should fail.

Please note that by default the MongoDB Node.js driver has a connection pool of 5 connections.

Let’s take a look at an example where we will extend the connection pool as well as make sure that our API is capable of returning static data should the database be down.

// code excerpt
let db;
let collection;

MongoClient
  .connect(mongo_uri, { useNewUrlParser: true, poolSize: 10 })
  .then(client => {
    db = client.db('my-db');
    collection = db.collection('my-collection');
  })
  .catch(error => console.error(error));

app.get('/static', (req, res) => {
  res.status(200).json('Some static data');
});

app.get('/', (req, res) => {
  collection.find({}).toArray().then(response => res.status(200).json(response)).catch(error => console.error(error));
});

Again, checking the connection count should display 2 connections.

Database disconnect

It is recommended not to close database connections from Node.js. There are multiple reasons behind this. First of all, if we need to use different databases, we can use the db() method on the client to select another database and still use the same pool of connections. The pool is there to make sure that a blocking operation cannot freeze the Node.js process/application.
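For illustration, reusing the same client for a second database might look like this (a sketch; the database and collection names here are made up):

// reuse the existing client (and its connection pool) for another database
const otherDb = dbClient.db('analytics-db');
const eventsCollection = otherDb.collection('events');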

Of course, it’d make sense to disconnect from the database if the application closes. We can achieve this using the following few lines of code:

// create an initial variable
let dbClient;
// assign the client from MongoClient
MongoClient
  .connect(mongo_uri, { useNewUrlParser: true, poolSize: 10 })
  .then(client => {
    db = client.db('my-db');
    dbClient = client;
    collection = db.collection('my-collection');
  })
  .catch(error => console.error(error));
// listen for the signal interruption (ctrl-c)
process.on('SIGINT', () => {
  dbClient.close();
  process.exit();
});

Conclusion

In this post, we have seen ways to handle MongoDB connections in a Node.js/Express application. Be careful to manage your connections, to avoid slowing down the application through excessive memory consumption.

Power Thermometer

This is a little project I have been working on. It is a power thermostat. Essentially it reads the ambient temperature and switches a relay on or off according to that temperature. The brief is fairly simple; initially I hope to use it to control the temperature in a cupboard where I am brewing my own beer.

The project is based around an Atmel ATTiny85 microcontroller. I chose this because it is small and more than capable. Additionally, I had a few spares lying around. The ATTiny reads the temperature from a one-wire Dallas DS18B20 temperature sensor. (See my earlier post on interfacing one of these with an Arduino.)
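On an Arduino-style toolchain, reading this sensor is typically done with the OneWire and DallasTemperature libraries. A rough sketch (the data pin number is a placeholder; use whatever the schematic assigns):

#include <OneWire.h>
#include <DallasTemperature.h>

OneWire oneWire(2);                   // DS18B20 data line on pin 2 (placeholder)
DallasTemperature sensors(&oneWire);  // wrap the bus in the Dallas driver

void setup() {
  sensors.begin();                    // detect sensors on the bus
}

float readTemperature() {
  sensors.requestTemperatures();      // trigger a conversion on the bus
  return sensors.getTempCByIndex(0);  // read back the first sensor found
}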

There are two switches for updating the set temperature: one to adjust the temperature up and the other to adjust it down. These work in 1 degree steps, but there is no reason why the steps cannot be programmed into the ATTiny at greater or smaller intervals.

A small OLED screen has been added. This will provide user feedback and display the set temperature and the current temperature.

For my requirements, when the ambient temperature reaches the set temperature it will switch off the relay, thus providing no more heat. As the ambient temperature cools down and drops below the set temperature, the relay will switch back on again. To avoid continual switching on and off I will have a 1 degree margin either side of my set temperature, as sketched below.
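As a rough illustration, the switching logic with that margin might look like this (a sketch only; RELAY_PIN is a placeholder and readTemperature() is the helper from the sensor sketch above):

const int RELAY_PIN = 3;    // relay drive pin (placeholder; see schematic)
const float MARGIN = 1.0;   // hysteresis band in degrees C
float setTemp = 20.0;       // target temperature, adjusted by the two switches

void loop() {
  float ambient = readTemperature();
  if (ambient >= setTemp + MARGIN) {
    digitalWrite(RELAY_PIN, LOW);    // warm enough: switch the heater off
  } else if (ambient <= setTemp - MARGIN) {
    digitalWrite(RELAY_PIN, HIGH);   // too cool: switch the heater back on
  }
  // between the two thresholds the relay keeps its previous state
}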

For troubleshooting I have also included an ICSP header. This will allow me to program and re-program the ATTiny85 without needing to keep removing it from the board.

I intend to package it all up in one small unit that includes the power supply, low level circuitry, relay and screen, with a simple in and out mains power connector so that it can be plugged in line with a domestic heater.

The schematic can be seen below.

Power Thermometer Schematic

I am by no means an expert in board layout. I am sure I have broken many rules in my layout below, so please do not take a leaf out of my book on this one. As you can see, the transformer is on the left. The rectification and smoothing circuits are in the middle. The headers and connectors are at the top right and the ATTiny is bottom right. The pull-up resistors are filling up spaces. I have added super thick traces for the mains 220V connection to the transformer. All low level voltage is over to the right of the board. There are a few surface mount components. I have never soldered surface mount before, but figured now is the time to have a go. I think I’ll be ok for the three components, i.e. 2 large caps and the bridge rectifier. Any suggestions on board layout improvements are more than welcome.

Power Thermometer Board Layout

The next thing to do was to find a board manufacturer. I looked for a long time to find a good-priced one for hobbyists, and the best deal I could find was seeedstudio.com. They are in China but are so well known that they even have their own export function in Eagle, the schematic program that I used to design the circuit. They are also very competitively priced; I could not find anywhere outside China that comes close to competing with them. The ordering process was simple and it cost £10 for 10 boards including delivery. I chose red ones and added some additional screen printing. The ordering process was very straightforward as Eagle exported the required gerber files. These were then attached to the seeed order and payment was made. Within a week I received confirmation that the boards were completed, along with a photo. I am now waiting for them to arrive.

Power Thermometer Completed Board

The boards arrived within 4 weeks, delayed by a week as the whole of China had shut down at the start of the covid-19 outbreak. The quality is exactly what you should expect from a board manufacturer.

Make a python module installable with PIP install

Here is an absolute minimal example, showing the basic steps of preparing and uploading your package to PyPI using setuptools and twine.

This is by no means a substitute for reading at least the tutorial; there is much more to it than is covered in this very basic example.

Creating the package itself is already covered elsewhere, so let us assume we have that step covered, with our project structure like this:

.
└── hellostackoverflow/
    ├── __init__.py
    └── hellostackoverflow.py
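The contents of the module itself don’t matter much for packaging purposes; for this walkthrough, assume hellostackoverflow.py contains something like the following (the greeting() function is what we will call later to test the installed package):

# hellostackoverflow/hellostackoverflow.py
def greeting():
    return 'Hello Stack Overflow!'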

In order to use setuptools for packaging, we need to add a file setup.py; this goes into the root folder of our project:

.
├── setup.py
└── hellostackoverflow/
    ├── __init__.py
    └── hellostackoverflow.py

At the minimum, we specify the metadata for our package; our setup.py would look like this:

from setuptools import setup

setup(
    name='hellostackoverflow',
    version='0.0.1',
    description='a pip-installable package example',
    license='MIT',
    packages=['hellostackoverflow'],
    author='Benjamin Gerfelder',
    author_email='benjamin.gerfelder@gmail.com',
    keywords=['example'],
    url='https://github.com/bgse/hellostackoverflow'
)

Since we have set license='MIT', we include a copy in our project as LICENCE.txt, alongside a readme file in reStructuredText as README.rst:

.
├── LICENCE.txt
├── README.rst
├── setup.py
└── hellostackoverflow/
    ├── __init__.py
    └── hellostackoverflow.py

At this point, we are ready to start packaging using setuptools. If we do not have it already installed, we can install it with pip:

pip install setuptools

To create a source distribution, we call our setup.py from the command line at our project root folder, specifying that we want sdist:

python setup.py sdist

This will create our distribution package and egg-info, and result in a folder structure like this, with our package in dist:

.
├── dist/
├── hellostackoverflow.egg-info/
├── LICENCE.txt
├── README.rst
├── setup.py
└── hellostackoverflow/
    ├── __init__.py
    └── hellostackoverflow.py

At this point, we have a package we can install using pip, so from our project root (assuming you have all the naming like in this example):

pip install ./dist/hellostackoverflow-0.0.1.tar.gz

If all goes well, we can now open a Python interpreter, I would say somewhere outside our project directory to avoid any confusion, and try to use our shiny new package:

Python 3.5.2 (default, Sep 14 2017, 22:51:06) 
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from hellostackoverflow import hellostackoverflow
>>> hellostackoverflow.greeting()
'Hello Stack Overflow!'

Now that we have confirmed the package installs and works, we can upload it to PyPI.

Since we do not want to pollute the live repository with our experiments, we create an account for the testing repository, and install twine for the upload process:

pip install twine

Now we’re almost there. With our account created, we simply tell twine to upload our package; it will ask for our credentials and upload our package to the specified repository:

twine upload --repository-url https://test.pypi.org/legacy/ dist/*

We can now log into our account on the PyPI test repository and marvel at our freshly uploaded package for a while, and then grab it using pip:

pip install --index-url https://test.pypi.org/simple/ hellostackoverflow

As we can see, the basic process is not very complicated. As I said earlier, there is a lot more to it than covered here, so go ahead and read the tutorial for more in-depth explanation.

Working with Python modules

Module: A module is a file containing Python definitions and statements. The file name is the module name with the suffix .py appended.

Module Example: Assume we have a single Python script in the current directory; here I am calling it mymodule.py.

The file mymodule.py contains the following code:

def myfunc():
    print("Hello!")

If we run the python3 interpreter from the current directory, we can import and run the function myfunc in the following different ways (you would typically just choose one of the following):

>>> import mymodule
>>> mymodule.myfunc()
Hello!
>>> from mymodule import myfunc
>>> myfunc()
Hello!
>>> from mymodule import *
>>> myfunc()
Hello!

Now assume you have the need to put this module into its own dedicated folder to provide a module namespace, instead of just running it ad-hoc from the current working directory. This is where it is worth explaining the concept of a package.

Package: Packages are a way of structuring Python’s module namespace by using dotted module names. For example, the module name A.B designates a submodule named B in a package named A. Just as the use of modules saves the authors of different modules from having to worry about each other’s global variable names, the use of dotted module names saves the authors of multi-module packages like NumPy or the Python Imaging Library from having to worry about each other’s module names.

Package Example: Let’s now assume we have the following folder and files. Here, mymodule.py is identical to before, and __init__.py is an empty file:

.
└── mypackage/
    ├── __init__.py
    └── mymodule.py

The __init__.py files are required to make Python treat the directories as containing packages. For further information, please see the Modules documentation link provided later on.
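While an empty __init__.py is enough to mark the directory as a package, it can also be used to re-export names. For example, a one-line __init__.py like the following (just an optional sketch, not required for this example) would make myfunc available directly on the package:

# mypackage/__init__.py
from .mymodule import myfunc

With that in place, import mypackage followed by mypackage.myfunc() would also work.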

Our current working directory is one level above the package folder called mypackage:

$ ls
mypackage

If we run the python3 interpreter now, we can import and run the module mymodule.py containing the required function myfunc in the following different ways (you would typically just choose one of the following):

>>> import mypackage.mymodule
>>> mypackage.mymodule.myfunc()
Hello!
>>> from mypackage import mymodule
>>> mymodule.myfunc()
Hello!
>>> from mypackage.mymodule import myfunc
>>> myfunc()
Hello!
>>> from mypackage.mymodule import *
>>> myfunc()
Hello!

Assuming Python 3, there is excellent documentation at: Modules

In terms of naming conventions for packages and modules, the general guidelines are given in PEP-0008 - please see Package and Module Names

Modules should have short, all-lowercase names. Underscores can be used in the module name if it improves readability. Python packages should also have short, all-lowercase names, although the use of underscores is discouraged.

Set Up Raspberry Pi for SSH over USB

Setting up ssh between Mac and Pi Zero

For users with macOS Sierra (and I think newer):

First, you need to have an RNDIS/Ethernet Gadget interface in the Mac’s Network Preferences.

This is available as standard on Sierra. However, it might not appear automatically in the Network Preferences, and you may have to add it, using the + icon. The list of available interfaces to add will not include RNDIS/Ethernet Gadget unless the Pi Zero is actually attached.
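Note that this all assumes the Pi Zero’s SD card has already been configured for USB Ethernet gadget mode (covered in the first link at the bottom of this post). In short, on the boot partition you add something like:

# at the bottom of config.txt
dtoverlay=dwc2

# in cmdline.txt, insert after rootwait (keeping everything on one line)
modules-load=dwc2,g_ether

and create an empty file named ssh on the boot partition to enable the SSH server.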

Once added, you should see that it’s Connected, and that the RNDIS/Ethernet Gadget has a self-assigned IP address (it will not be able to connect to the Internet yet).

You should now be able to reach the Pi at raspberrypi.local, e.g. with ssh pi@raspberrypi.local.

The next step is to give the Pi Internet access; you can do this in Sharing Preferences, and share the Internet connection to the newly-established interface.

Default user: pi
Default pass: raspberry
Default name: raspberrypi

https://www.thepolyglotdeveloper.com/2016/06/connect-raspberry-pi-zero-usb-cable-ssh/
https://raspberrypi.stackexchange.com/questions/64112/macos-not-discovering-raspberry-pi-zero-as-an-usb-ethernet-device?newreg=29d9291dd9c74376a42e18a79194ab51

Compiling ARM assembler for Raspberry Pi

When compiling assembly source code on the Raspberry Pi we need to do the following: take your source file, e.g. source.s, and run the as (assembler) command to create an object file:

as -o source.o source.s

This will create an object file named source.o. We now need to link the object files together into one executable file (it’s possible that you will have multiple object files; see below for how to link multiple source files). To do this we run the ld (linker) command:

ld -o source source.o

This will have created an executable file called source. The previous steps will work fine for assembly source files, assuming that you have a label called _start within your source.s file, which is the entry point for our code.
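For reference, a minimal source.s with a _start entry point might look like this (a sketch for ARM Linux, which does nothing but make the exit system call):

.global _start

_start:
    mov r0, #0      @ exit status 0
    mov r7, #1      @ syscall number for exit on ARM Linux EABI
    svc #0          @ make the system call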

Another method is to use the gcc compiler, which has a built-in linker. In this case we generate the object files as above. Once we have the object files we run GCC with the following command.

gcc -o source source.o

This will have the same effect as running the linker. However, as we are using the C compiler, our source code file needs to acknowledge this by using the entry point named main rather than _start.

If we have functions in separate source files then we need to create multiple object files before we link them. This is achieved by first creating all of our object files, e.g.

as -o source.o source.s
as -o source1.o source1.s
as -o source2.o source2.s
as -o source3.o source3.s

When we have all our object files created we link them all together like:

ld -o source source.o source1.o source2.o source3.o

or using GCC:

gcc -o source source.o source1.o source2.o source3.o

Creating .img files for Bare Metal

If you are coding for bare metal then you will need to convert your file to an image file to be dropped into the OS. I usually call these files kernel.img.

The first thing that we need to do is create a .ELF file. This is done using GCC with a command similar to the following:

gcc -o kernel.elf mykernel.c

We then need to convert the .ELF file to our .IMG file (a .ELF file is essentially already a binary file; it just contains some extra pieces of information). To convert to a .IMG we use the following or similar:

objcopy kernel.elf -O binary kernel.img

This should provide a kernel.img file that can replace the kernel file on Raspbian, for example, and is essentially a new operating system.

A Note on file formats

.s is our source code in assembly language.
.c is our source code file written in C, which is a higher level language than assembly.
.o is an object file, a conversion of our source files into machine code. They are a little like libraries in a way, as they contain code that is not directly executable and needs to be linked into an executable before it can run.
.elf is an executable and linkable format file, similar to an object file, but it contains the full program, whereas a .o file could contain references to external symbols from libraries or other object files.
.img is a binary file of our executable stored as a disc image.

A Note on compiler commands

as starts our assembler. Feed it with .s files.
gcc starts our C compiler. Feed it with .c files.
ld starts our linker. This links the object files we created with as or gcc together into a single binary file. Feed it with .o files.
objcopy converts between binary file formats.

Date and Time in Python

Example Code

import time
import datetime

print("Time in seconds since the epoch: %s" % time.time())
print("Current date and time: ", datetime.datetime.now())
print("Or like this: ", datetime.datetime.now().strftime("%y-%m-%d-%H-%M"))
print("Current year: ", datetime.date.today().strftime("%Y"))
print("Month of year: ", datetime.date.today().strftime("%B"))
print("Week number of the year: ", datetime.date.today().strftime("%W"))
print("Weekday of the week: ", datetime.date.today().strftime("%w"))
print("Day of year: ", datetime.date.today().strftime("%j"))
print("Day of the month : ", datetime.date.today().strftime("%d"))
print("Day of week: ", datetime.date.today().strftime("%A"))

Output

Time in seconds since the epoch: 1349271346.46
Current date and time: 2012-10-03 15:35:46.461491
Or like this: 12-10-03-15-35
Current year: 2012
Month of year: October
Week number of the year: 40
Weekday of the week: 3
Day of year: 277
Day of the month : 03
Day of week: Wednesday
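Going the other way, datetime can also parse a string back into a datetime object with strptime(), using the same format codes:

import datetime

# parse the compact timestamp format produced above
d = datetime.datetime.strptime("12-10-03-15-35", "%y-%m-%d-%H-%M")
print(d)  # 2012-10-03 15:35:00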

Roll Your Own Raspberry Pi Kernel

Getting Started

There are many distributions of operating systems available for the Raspberry Pi. But what if you wanted to write your own? Where would you start if you wanted a realtime system without all the overheads of a bulky operating system? This is my work-through on getting started with writing my own kernel from scratch for the Raspberry Pi. I will endeavour to write this in C, mainly because it’s a language that I am familiar with, but also because it has a track record of being the language used to write most of the operating systems in use today, and because it is the go-to language for interfacing directly with hardware, which is ultimately what we are trying to do.

There are probably many tutorials out there already that show you how to do something similar. This is not meant to compete with them, and there are certainly no guarantees that it will match them. This blog isn’t really for you. It’s for me. Though you may find it useful.

Of the other tutorials that I have found, two really stand out.

Baking Pi - A Tutorial for writing an OS for the Raspberry Pi, mainly in Assembly.
Valvers.com - A tutorial for compiling a system for the Raspberry Pi, written mainly in C.

Cross Compiling

I will be doing the majority of the work on my mac. The thing with doing that is that my mac has an Intel processor, whereas the Raspberry Pi has a Broadcom ARM chipset. Therefore what is known as a cross compiler is needed.

The GCC ARM Embedded project on Launchpad provides a GCC toolchain to use on the mac. Additionally, you can head to https://developer.arm.com/open-source/gnu-toolchain/gnu-rm, which now holds the most recent and up to date compilers. However, the easiest way to install the cross compiler on a mac is to use homebrew. First install homebrew, then install the compiler with the command brew install gcc-arm-none-eabi-49. You should also install the gnu debugger at this time: brew install gdb-arm-none-eabi.

You can now type arm-none-eabi-gcc on the command line, and if all is functioning you should see a response similar to the below.

>arm-none-eabi-gcc
arm-none-eabi-gcc: fatal error: no input files
compilation terminated.

Again, you can test the debugger by typing arm-none-eabi-gdb. If all is well this should open up the debugger on the command line. You can exit by typing quit and return.

The GCC settings for compiling code for the original Raspberry Pi can be found on the elinux page.

-Ofast -mfpu=vfp -mfloat-abi=hard -march=armv6zk -mtune=arm1176jzf-s

As mentioned on that page, -Ofast may cause issues, so it is recommended to use -O2 instead. Also, -mcpu=arm1176jzf-s can be used in place of -march=armv6zk -mtune=arm1176jzf-s.

The Raspberry Pi 2 has a different architecture; the processor has been replaced by a quad-core Cortex-A7. To compile effectively for this processor the compiler options are:

-O2 -mfpu=neon-vfpv4 -mfloat-abi=hard -march=armv7-a -mtune=cortex-a7

The Compiler and Linker

A compiler converts our C program into optimised assembly. No more, no less.

The C compiler then asks the assembler to assemble that file into an object file. This will contain relocatable machine code along with symbol information that the linker will use.

The linker’s job is to link all the files together, hence the name linker, into a file that can be executed. The linker requires a linker script which tells the linker how to organise the object files. The linker will then resolve symbols to addresses when it has arranged all the objects according to the rules in the linker script.

Basically, there are some things that need to happen before our C code can run. Variables need to be initialised. This is taken care of by an object file which is linked in because the linker script includes a reference to it. The object file is called crt0.o.

This code uses symbols that the linker can resolve to find where the area of initialised variables starts and ends, and it will zero that memory section. It sets up a stack pointer and then calls the C entry point. On some platforms C symbols are prefixed with an underscore in their assembler form, so the C main symbol appears in assembler as _main; on ARM EABI there is no prefix, and the entry point is simply main.

The simplest C program

int main(void) {
    while(1) {
    }
    return 0;
}

This basically does nothing; it just implements an infinite loop which will hold all the code that we need to implement our kernel.

We compile with the following for the Broadcom BCM2835 (ARM1176)

arm-none-eabi-gcc -O2 -mfpu=vfp -mfloat-abi=hard -march=armv6zk -mtune=arm1176jzf-s mykernel.c

And we compile with the following for the Broadcom BCM2836 (ARM Cortex-A7):

arm-none-eabi-gcc -O2 -mfpu=vfp -mfloat-abi=hard -march=armv7-a -mtune=cortex-a7 mykernel.c

GCC will compile the source code successfully, but the link will fail with something similar to the following:

/usr/local/Cellar/gcc-arm-none-eabi-49/20150306/bin/../lib/gcc/arm-none-eabi/4.9.3/../../../../arm-none-eabi/lib/fpu/libc.a(lib_a-exit.o): In function `exit':
exit.c:(.text.exit+0x2c): undefined reference to `_exit'
collect2: error: ld returned 1 exit status
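The link fails because newlib’s exit() expects a platform-specific _exit function that we haven’t provided. One way to get past this while experimenting is to supply a minimal stub ourselves (just a sketch; a real bare-metal build would also use its own startup code and linker script):

/* Minimal stub so newlib's exit() can link.
   On bare metal there is nowhere to return to, so spin forever. */
void _exit(int code) {
    (void)code;
    while (1) {
    }
}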

Connect to MySQL server in C

Connecting to a MySQL database from C is a fairly straightforward process. The following instructions should work on any Linux distro or UNIX computer.

#include <mysql.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
   MYSQL *conn;
   MYSQL_RES *res;
   MYSQL_ROW row;

   char *server = "localhost";
   char *user = "USER";         /* Enter your mysql username */
   char *password = "PASSWORD"; /* Enter your mysql password */
   char *database = "mysql";

   conn = mysql_init(NULL);

   /* Connect to the mysql database */
   if (!mysql_real_connect(conn, server,
         user, password, database, 0, NULL, 0)) {
      fprintf(stderr, "%s\n", mysql_error(conn));
      exit(1);
   }

   /* send SQL query */
   if (mysql_query(conn, "show tables")) {
      fprintf(stderr, "%s\n", mysql_error(conn));
      exit(1);
   }

   res = mysql_use_result(conn);

   /* output all table names */
   printf("MySQL Tables in mysql database:\n");
   while ((row = mysql_fetch_row(res)) != NULL)
      printf("%s \n", row[0]);

   /* close our connection */
   mysql_free_result(res);
   mysql_close(conn);

   return 0;
}

MySQL comes with a script called mysql_config. It provides useful information for compiling your MySQL client and connecting it to a MySQL database server. You need to use the following two options.

Pass the --libs option, i.e. ‘Libraries’, to show the libraries required to link with the MySQL client library.

$ mysql_config --libs

Output:

-L/usr/local/Cellar/mysql/8.0.13/lib -lmysqlclient -lssl -lcrypto

Pass the --cflags option, i.e. ‘Compiler flags’, to find the include files and critical compiler flags and defines used when compiling the libmysqlclient library.

$ mysql_config --cflags

Output:

-I/usr/local/Cellar/mysql/8.0.13/include/mysql

You need to pass the above two options to your compiler. So to compile the above program, enter:

$ gcc $(mysql_config --cflags) mysql.c $(mysql_config --libs)

Now execute program:

$ ./a.out

Output:

MySQL Tables in mysql database:
columns_priv
component
db
default_roles
engine_cost
func
general_log
global_grants
gtid_executed
help_category
help_keyword
help_relation
help_topic
innodb_index_stats
innodb_table_stats
password_history
plugin
procs_priv
proxies_priv
role_edges
server_cost
servers
slave_master_info
slave_relay_log_info
slave_worker_info
slow_log
tables_priv
time_zone
time_zone_leap_second
time_zone_name
time_zone_transition
time_zone_transition_type
user

You have successfully connected to and retrieved information from your MySQL database from within a C environment.
