How to start writing macros in LibreOffice Basic

I have long promised to write about the scripting language Basic and creating macros in LibreOffice. This article is devoted to the types of data used in LibreOffice Basic, and to a greater extent, descriptions of variables and the rules for using them. I will try to provide enough information for advanced as well as novice users.

(And, I would like to thank everyone who commented on and offered recommendations on the Russian article, especially those who helped answer difficult questions.)

Variable naming conventions

Variable names cannot contain more than 255 characters. They should start with either upper- or lower-case letters of the Latin alphabet, and they can include underscores (“_”) and numerals. Other punctuation or characters from non-Latin alphabets can cause a syntax error or a BASIC runtime error if names are not put within square brackets.

Here are some examples of correct variable names:

MyNumber=5
MyNumber_5=20.5
[My Number]=20.5
[DéjàVu]="It seems that I have seen it!"
[Мой Номер]="The first has went!"
[Мой % от дохода]=0.0001

Note: In the examples that contain square brackets, if you remove the brackets, the macro will stop with an error. As you can see, you can use localized variable names. Whether it makes sense to do so is up to you.

Declaring variables

Strictly speaking, it is not necessary to declare variables in LibreOffice Basic (except for arrays). If you write a macro of a couple of lines to work with small documents, you don't need to declare variables, because the variable will automatically be declared as the Variant type. For longer macros, or for macros that work on large documents, it is strongly recommended that you declare variables. First, declarations increase the readability of the code. Second, they let you control the types of your variables, which greatly facilitates the search for errors. Third, the Variant type is resource-intensive: considerable time is spent on its hidden conversions, and it does not choose the optimal variable type for the data, which increases the load on the computer.

If you prefer Hungarian notation, Basic can automatically assign a type to a variable based on its prefix (the first letter of its name). For this, the DefXXX statement is used, where XXX is the letter designation of the type. The statement applies to the whole module, and it must appear before any subroutines and functions. There are 11 such statements:

DefBool – for boolean variables;
DefInt – for integer variables of type Integer;
DefLng – for integer variables of type Long Integer;
DefSng – for variables with a single-precision floating point;
DefDbl – for variables with double-precision floating-point type Double;
DefCur – for variables with a fixed point of type Currency;
DefStr – for string variables;
DefDate – for date and time variables;
DefVar – for variables of Variant type;
DefObj – for object variables;
DefErr – for object variables containing error information.

If you already have an idea of the types of variables in LibreOffice Basic, you probably noticed that there is no Byte type in this list, but there is a strange beast of the Error type. Unfortunately, you just have to remember this; I have not yet discovered why it is so. This method is convenient because the type is assigned to variables automatically, but it does not let you catch typos in variable names. In addition, non-Latin letters cannot be used with it; that is, all variable names in square brackets must be declared explicitly.

To catch typos in explicitly declared variable names, you can use the statement Option Explicit. This statement should be the first line of code in the module; all other commands, except comments, must be placed after it. It tells the interpreter that all variables must be declared explicitly; otherwise, it produces an error. Naturally, this statement makes it pointless to use the Def statements in the code.
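A minimal sketch of how Option Explicit catches a typo (the variable names are made up):

```basic
Option Explicit   ' must be the first line of code in the module

Sub TestOptionExplicit
    Dim iCount As Integer
    iCount = 10   ' fine: the variable is declared above
    iCont = 20    ' BASIC runtime error "Variable not defined": the typo is caught
End Sub
```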

A variable is declared using the statement Dim. You can declare several variables simultaneously, even different types, if you separate their names with commas. To determine the type of a variable with an explicit declaration, you can use either a corresponding keyword or a type-declaration sign after the name. If a type-declaration sign or a keyword is not used after the variable, then the Variant type is automatically assigned to it. For example:

Dim iMyVar                                 'variable of Variant type
Dim iMyVar1 As Integer, iMyVar2 As Integer 'in both cases Integer type
Dim iMyVar3, iMyVar4 As Integer            'the first variable is Variant, the second is Integer

Variable types

LibreOffice Basic supports seven classes of variables:

  • Logical variables containing one of the values: TRUE or FALSE
  • Numeric variables containing numeric values. They can be integer, integer-positive, floating-point, and fixed-point
  • String variables containing character strings
  • Date variables can contain a date and/or time in the internal format
  • Object variables can contain objects of different types and structures
  • Arrays
  • Abstract type Variant

Logical variables – Boolean

Variables of the Boolean type can contain only one of two values: TRUE or FALSE. In numerical terms, FALSE corresponds to 0, and TRUE corresponds to -1 (minus one). Any value other than zero passed to a variable of the Boolean type will be converted to TRUE, that is, to minus one. You can explicitly declare a variable in the following way:

Dim MyBoolVar As Boolean

There is no type-declaration sign for this type. For an implicit declaration, you can use the DefBool statement. For example:

DefBool b 'variables beginning with b by default are the type Boolean

The initial value of the variable is set to FALSE. A Boolean variable requires one byte of memory.

Integer variables

There are three types of integer variables: Byte, Integer, and Long Integer. These variables can contain only integers. When you assign a number with a fractional part to such a variable, it is rounded according to the rules of classical arithmetic (and not upward, as the help section claims). The initial value of these variables is 0 (zero).

Byte

Variables of the Byte type can contain only non-negative integer values in the range from 0 to 255. Do not confuse this type with the physical size of information in bytes: although you can write a hexadecimal number to such a variable, the word "Byte" refers only to the dimension of the number. You can declare a variable of this type as follows:

Dim MyByteVar As Byte

There is no type-declaration sign and no Def statement for this type. Because of its small size, this type is most convenient for a loop index whose values do not go beyond this range. A Byte variable requires one byte of memory.

Integer

Variables of the Integer type can contain integer values from -32768 to 32767. They are convenient for fast calculations in integers and are suitable for a loop index. % is a type-declaration sign. You can declare a variable of this type in the following ways:

Dim MyIntegerVar%
Dim MyIntegerVar As Integer

For an implicit declaration, you can use the DefInt statement. For example:

DefInt i 'variables starting with i by default have type Integer

An Integer variable requires two bytes of memory.

Long integer

Variables of the Long Integer type can contain integer values from -2147483648 to 2147483647. Long Integer variables are convenient in integer calculations when the range of type Integer is insufficient for the implementation of the algorithm. & is a type-declaration sign. You can declare a variable of this type in the following ways:

Dim MyLongVar&
Dim MyLongVar As Long

For an implicit declaration, you can use the DefLng statement. For example:

DefLng l 'variables starting with l have Long by default

A Long Integer variable requires four bytes of memory.

Numbers with a fraction

All variables of these types can take positive or negative values of numbers with a fraction. The initial value for them is 0 (zero). As mentioned above, if a number with a fraction is assigned to a variable capable of containing only integers, LibreOffice Basic rounds the number according to the rules of classical arithmetic.

Single

Single variables can take positive or negative values in the range from 3.402823x10E38 to 1.401298x10E-45. Values of variables of this type are stored in single-precision floating-point format: only the significant digits are stored (about seven decimal digits), and the rest is stored as a power of ten (the order of the number). In the Basic IDE debugger, you can see only 6 decimal places, but this is a blatant lie. Computations with variables of the Single type take longer than with Integer variables, but they are faster than computations with the Double type. The type-declaration sign is !. You can declare a variable of this type in the following ways:

Dim MySingleVar!
Dim MySingleVar As Single

For an implicit declaration, you can use the DefSng statement. For example:

DefSng f 'variables starting with f have the Single type by default

A Single variable requires four bytes of memory.

Double

Variables of the Double type can take positive or negative values in the range from 1.7976931348623157x10E308 to 1.0x10E-307. Why such a strange range? Most likely there are additional checks in the interpreter that lead to this situation. Values of variables of the Double type are stored in double-precision floating-point format and can have 15 decimal places. In the Basic IDE debugger, you can see only 14 decimal places, but this is also a blatant lie. Variables of the Double type are suitable for precise calculations, which take more time than with the Single type. The type-declaration sign is #. You can declare a variable of this type in the following ways:

Dim MyDoubleVar#
Dim MyDoubleVar As Double

For an implicit declaration, you can use the DefDbl statement. For example:

DefDbl d 'variables beginning with d have the type Double by default

A variable of the Double type requires 8 bytes of memory.

Currency

Variables of the Currency type are stored as fixed-point numbers with 15 digits in the integer part and 4 digits in the fractional part. The range of values is from -922337203685477.5808 to +922337203685477.5807. Variables of the Currency type are intended for exact calculations of monetary values. The type-declaration sign is @. You can declare a variable of this type in the following ways:

Dim MyCurrencyVar@
Dim MyCurrencyVar As Currency

For an implicit declaration, you can use the DefCur statement. For example:

DefCur c 'variables beginning with c have the type Currency by default

A Currency variable requires 8 bytes of memory.

String

Variables of the String type can contain strings in which each character is stored as the corresponding Unicode value. They are used to work with textual information, and in addition to printable characters (symbols), they can also contain non-printable characters. I do not know the documented maximum length of a string; Mike Kaganski experimentally found that LibreOffice crashes beyond 2147483638 characters, which corresponds to almost 4 gigabytes of character data. The type-declaration sign is $. You can declare a variable of this type in the following ways:

Dim MyStringVar$
Dim MyStringVar As String

For an implicit declaration, you can use the DefStr statement. For example:

DefStr s 'variables starting with s have the String type by default

The initial value of these variables is an empty string (“”). The memory required to store string variables depends on the number of characters in the variable.

Date

Variables of the Date type can contain only date and time values stored in an internal format. In fact, this internal format is the double-precision floating-point format (Double), where the integer part is the number of days and the fractional part is the fraction of the day (so 0.00001157407 is one second). The value 0 corresponds to 30.12.1899. The Basic interpreter automatically converts it to a readable form when outputting, but not when loading. You can use the DateSerial, DateValue, TimeSerial, or TimeValue functions to quickly convert to the internal format of the Date type. To extract a certain part from a variable in the Date format, you can use the Day, Month, Year, Hour, Minute, or Second functions. The internal format lets us compare date and time values by calculating the difference between two numbers. There is no type-declaration sign for the Date type, so to declare it explicitly, you need to use the Date keyword.

Dim MyDateVar As Date

For an implicit declaration, you can use the DefDate statement. For example:

DefDate y 'variables starting with y have the Date type by default

A Date variable requires 8 bytes of memory.
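As a sketch, the conversion functions mentioned above can be combined like this (the dates are arbitrary):

```basic
Sub TestDate
    Dim dBirthday As Date
    dBirthday = DateSerial(1990, 4, 1)   ' build a Date from year, month, day
    MsgBox Year(dBirthday)               ' extract a part: 1990
    ' the internal format is numeric, so dates can be compared and subtracted
    MsgBox DateSerial(1990, 5, 1) - dBirthday   ' difference in days: 30
End Sub
```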

Types of object variables

Two variable types in LibreOffice Basic can be considered objects.

Objects

Variables of the Object type are variables that store objects. In general, an object is any isolated part of a program that has its own structure, properties, and methods for access and data processing. For example, a document, a cell, a paragraph, and a dialog box are objects; they have names, sizes, properties, and methods. In turn, these objects consist of other objects, which in turn may also consist of objects. Such a "pyramid" of objects is often called an object model; it allows us, when developing small objects, to combine them into larger ones, and through a larger object gain access to the smaller ones. This lets us operate on our documents, creating and processing them, while abstracting from any specific document. There is no type-declaration sign for the Object type, so for an explicit declaration, you need to use the Object keyword.

Dim MyObjectVar As Object

For an implicit declaration, you can use the DefObj statement. For example:

DefObj o 'variables beginning with o have the type Object by default

A variable of the Object type does not store the object itself, only a reference to it. The initial value for this type of variable is Null.

Structures

A structure is essentially an object. If you look in the Basic IDE debugger, most (but not all) structures have the Object type; some do not. For example, the Error structure has the type Error. Roughly speaking, a structure in LibreOffice Basic is simply a set of variables grouped into one object variable, without special access methods. Another significant difference is that when declaring a variable of a structure type, we must specify the name of the structure rather than Object. For example, if MyNewStructure is the name of a structure, the declaration of a variable will look like this:

Dim MyStructureVar As MyNewStructure

There are many built-in structures, but you can also create your own. Structures are convenient when we need to operate on sets of heterogeneous information that should be treated as a single whole. For example, to create a tPerson structure:

Type tPerson
  Name As String
  Age As Integer
  Weight As Double
End Type

The definition of the structure should go before subroutines and functions that use it.
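Once tPerson is defined, a variable of this type can be declared and filled like any other (a sketch; the values are made up):

```basic
Sub TestPerson
    Dim oPerson As New tPerson
    oPerson.Name = "Amy Boyer"
    oPerson.Age = 33
    oPerson.Weight = 61.5
    MsgBox oPerson.Name & " is " & oPerson.Age & " years old"
End Sub
```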

To fill a structure, you can use, for example, the built-in structure com.sun.star.beans.PropertyValue:

Dim oProp As New com.sun.star.beans.PropertyValue
oProp.Name = "Age"          'set the Name field
oProp.Value = "Amy Boyer"   'set the Value field

For simpler filling of the structure, you can use the With statement:

Dim oProp As New com.sun.star.beans.PropertyValue
With oProp
  .Name = "Age"          'set the Name field
  .Value = "Amy Boyer"   'set the Value field
End With

Initial values are set for each field of the structure individually, according to the field's own type.

Variant

Variant is a virtual type of variable: the actual type is selected automatically for the data being operated on. The only problem is that the interpreter does not try to save your resources, so it does not choose the most optimal types. For example, it does not know that 1 fits into a Byte and 100000 into a Long Integer, although it does reproduce a type if the value is passed from another variable with a declared type. The conversions themselves are also quite resource-intensive, so this type of variable is the slowest of all. If you want to declare such a variable explicitly, you can use the Variant keyword, but you can also omit the type description altogether; the Variant type will be assigned automatically. There is no type-declaration sign for this type.

Dim MyVariantVar
Dim MyVariantVar As Variant

For an implicit declaration, you can use the DefVar statement. For example:

DefVar v 'variables starting with v have the Variant type by default

This variable type is assigned by default to all undeclared variables.

Arrays

Arrays are a special type of variable: a data set reminiscent of a mathematical matrix, except that the data can be of different types, and its elements are accessed by index (element number). A one-dimensional array resembles a column or row, and a two-dimensional array resembles a table. One feature of arrays in LibreOffice Basic distinguishes them from many other programming languages: since we have the abstract Variant type, the elements of an array do not need to be homogeneous. That is, if there is a three-element array MyArray indexed from 0 to 2, and we write a name into MyArray(0), an age into MyArray(1), and a weight into MyArray(2), those elements can have, respectively, the types String, Integer, and Double. In this case, the array resembles a structure with access to elements by index. Array elements can of course also be homogeneous: other arrays, objects, structures, strings, or any other data type available in LibreOffice Basic can be used.

Arrays must be declared before they are used. Although the index space can be anywhere in the range of the Integer type, from -32768 to 32767, by default the initial index is 0. You can declare an array in several ways:

Dim MyArrayVar(5) As String        'string array with 6 elements, from 0 to 5
Dim MyArrayVar$(5)                 'same as the previous
Dim MyArrayVar(1 To 5) As String   'string array with 5 elements, from 1 to 5
Dim MyArrayVar(5, 5) As String     'two-dimensional string array with 36 elements, indexes in each dimension from 0 to 5
Dim MyArrayVar$(-4 To 5, -4 To 5)  'two-dimensional string array with 100 elements, indexes in each dimension from -4 to 5
Dim MyArrayVar()                   'empty array of the Variant type

You can change the default lower bound of arrays (the index of the first element) using the Option Base statement, which must be specified before any subroutines, functions, and user-defined structures. Option Base takes only one of two values, 0 or 1, which must immediately follow the keywords. The statement applies only to the current module.
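A short sketch of Option Base in action (LBound and UBound return the actual bounds of an array):

```basic
Option Base 1   ' arrays in this module start at index 1 by default

Sub TestArrayBounds
    Dim sNames(3) As String   ' three elements, with indexes from 1 to 3
    sNames(1) = "first"
    MsgBox LBound(sNames) & " to " & UBound(sNames)   ' shows "1 to 3"
End Sub
```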

Learn more

If you are just starting out in programming, Wikipedia provides general information about the array, structure, and many other topics.

For a more in-depth study of LibreOffice Basic, Andrew Pitonyak’s website is a top resource, as is the Basic Programmer’s guide. You can also use the LibreOffice online help. Completed popular macros can be found in the Macros section of The Document Foundation’s wiki, where you can also find additional links on the topic.

For more tips, or to ask questions, visit Ask LibreOffice and the OpenOffice forum.

3 steps to reduce a project's failure rate

It’s no secret that clear, concise, and measurable requirements lead to more successful projects. A study about large scale projects by McKinsey & Company in conjunction with the University of Oxford revealed that “on average, large IT projects run 45 percent over budget and 7 percent over time, while delivering 56 percent less value than predicted.” The research also showed that some of the causes for this failure were “fuzzy business objectives, out-of-sync stakeholders, and excessive rework.”

Business analysts often find themselves constructing these requirements through ongoing conversations. To do this, they must engage multiple stakeholders and ensure that engaged participants provide clear business objectives. This leads to less rework and more projects with a higher rate of success.

And they can do it in an open and inclusive way.

A framework for success

One tool for increasing project success rate is the Open Decision Framework. The Open Decision Framework is a resource that can help users make more effective decisions in organizations that embrace open principles. The framework stresses three primary principles: being transparent, being inclusive, and being customer-centric.

Transparent. Many times, developers and product designers assume they know how stakeholders use a particular tool or piece of software. But these assumptions are often incorrect and lead to misconceptions about what stakeholders actually need. Practicing transparency when having discussions with developers and business owners is imperative. Development teams need to see not only the “sunny day” scenario but also the challenges that stakeholders face with certain tools or processes. Ask questions such as: “What steps must be done manually?” and “Is this tool performing as you expect?” This provides a shared understanding of the problem and a common baseline for discussion.


Inclusive. It is vitally important for business analysts to watch body language and visual cues when gathering requirements. If someone is sitting with arms crossed or rolling their eyes, it's a clear indication that they do not feel heard. A BA must encourage open communication by reaching out to those people and giving them the opportunity to speak. Before starting the session, lay down ground rules that make the space safe for everyone to voice their opinions and share their thoughts. Listen to the feedback provided and respond politely when it is offered. Diverse opinions and collaborative problem solving will bring exciting ideas to the session.

Customer-centric. The first step to being customer-centric is to recognize the customer. Who is benefiting from this change, update, or development? Early in the project, conduct a stakeholder mapping to help determine the key stakeholders, their roles in the project, and the ways they fit into the big picture. Involving the right customers and assuring that their needs are met will lead to more successful requirements being identified, more realistic (real-life) tests being conducted, and, ultimately, a successful delivery.

When your requirement sessions are transparent, inclusive, and customer-centric, you’ll gather better requirements. And when you use the Open Decision Framework for running those sessions, participants feel more involved and empowered, and they deliver more accurate and complete requirements. In other words:

Transparent + Inclusive + Customer-Centric = Better Requirements = Successful Projects

An overview of the Perl 5 engine

As I described in “My DeLorean runs Perl,” switching to Perl has vastly improved my development speed and possibilities. Here I’ll dive deeper into the design of Perl 5 to discuss aspects important to systems programming.

Some years ago, I wrote “OpenGL bindings for Bash” as sort of a joke. The implementation was simply an X11 program written in C that read OpenGL calls on stdin (yes, as text) and emitted user input on stdout. Then I had a little bash include file that would declare all the OpenGL functions as Bash functions, which echoed the name of the function into a pipe, starting the GL interpreter process if it wasn’t already running. The point of the exercise was to show that OpenGL (the 1.4 API, not the newer shader stuff) could render a lot of graphics with just a few calls per frame by using GL display lists. The OpenGL library did all the heavy lifting, and Bash just printed a few dozen lines of text per frame.

In the end, though, Bash is a really horrible glue language, due to both its high overhead and its limited operations and syntax. Perl, on the other hand, is a great glue language.

Syntax aside…

If you’re not a regular Perl user, the first thing you probably notice is the syntax.

Perl 5 is built on a long legacy of awkward syntax, but more recent versions have removed the need for much of the punctuation. The remaining warts can mostly be avoided by choosing modules that give you domain-specific “syntactic sugar,” which even alter the Perl syntax as it is parsed. This is in stark contrast to most other languages, where you are stuck with the syntax you’re given, and infinitely more flexible than C’s macros. Combined with Perl’s powerful sparse-syntax operators, like map, grep, sort, and similar user-defined operators, I can almost always write complex algorithms more legibly and with less typing using Perl than with JavaScript, PHP, or any compiled language.

So, because syntax is what you make of it, I think the underlying machine is the most important aspect of the language to consider. Perl 5 has a very capable engine, and it differs in interesting and useful ways from other languages.

A layer above C

I don’t recommend anyone start working with Perl by looking at the interpreter’s internal API, but a quick description is useful. One of the main problems we deal with in the world of C is acquiring and releasing memory while also supporting control flow through a chain of function calls. C has a rough ability to throw exceptions using longjmp, but it doesn’t do any cleanup for you, so it is almost useless without a framework to manage resources. The Perl interpreter is exactly this sort of framework.

Perl provides a stack of variables independent from C’s stack of function calls on which you can mark the logical boundaries of a Perl scope. There are also API calls you can use to allocate memory, Perl variables, etc., and tell Perl to automatically free them at the end of the Perl scope. Now you can make whatever C calls you like, “die” out of the middle of them, and let Perl clean everything up for you.

Although this is a really unconventional perspective, I bring it up to emphasize that Perl sits on top of C and allows you to use as much or as little interpreted overhead as you like. Perl’s internal API is certainly not as nice as C++ for general programming, but C++ doesn’t give you an interpreted language on top of your work when you’re done. I’ve lost track of the number of times that I wanted reflective capability to inspect or alter my C++ objects, and following that rabbit hole has derailed more than one of my personal projects.

Lisp-like functions

Perl functions take a list of arguments. The downside is that you have to do argument count and type checking at runtime. The upside is you don’t end up doing that much, because you can just let the interpreter’s own runtime check catch those mistakes. You can also create the effect of C++’s overloaded functions by inspecting the arguments you were given and behaving accordingly.
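For example, a single function can behave like a set of C++ overloads by inspecting its argument list @_ (a sketch; the function name is made up):

```perl
use strict;
use warnings;

# area() accepts either one argument (a circle radius) or two (rectangle sides)
sub area {
    if (@_ == 1) {
        my ($r) = @_;
        return 3.14159265358979 * $r ** 2;   # circle
    }
    elsif (@_ == 2) {
        my ($w, $h) = @_;
        return $w * $h;                      # rectangle
    }
    die "area() expects 1 or 2 arguments\n";
}

print area(2, 3), "\n";   # prints 6
```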

Because arguments are a list, and return values are a list, this encourages Lisp-style programming, where you use a series of functions to filter a list of data elements. This “piping” or “streaming” effect can result in some really complicated loops turning into a single line of code.
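A sketch of that streaming style, with made-up data:

```perl
use strict;
use warnings;

my @lines = ("alpha 3", "beta 1", "gamma 2", "no number here");

# read right to left: filter the lines, split each one, then sort numerically
my @records =
    sort { $a->[1] <=> $b->[1] }    # order by the numeric field
    map  { [ split ' ', $_ ] }      # turn each line into [name, number]
    grep { /\d/ }                   # keep only lines containing a digit
    @lines;

print join(', ', map { $_->[0] } @records), "\n";   # prints "beta, gamma, alpha"
```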

Every function is available to the language as a coderef that can be passed around in variables, including anonymous closure functions. Also, I find sub {} more convenient to type than JavaScript’s function(){} or C++11’s [&](){}.

Generic data structures

The variables in Perl are either “scalars,” references, arrays, or “hashes” … or some other stuff that I’ll skip.

Scalars act as a string/integer/float hybrid and are automatically typecast as needed for the purpose you are using them. In other words, instead of determining the operation by the type of variable, the type of operator determines how the variable should be interpreted. This is less efficient than if the language knows the type in advance, but not as inefficient as, for example, shell scripting because Perl caches the type conversions.

Perl scalars may contain null characters, so they are fully usable as buffers for binary data. The scalars are mutable and copied by value, but optimized with copy-on-write, and substring operations are also optimized. Strings support unicode characters but are stored efficiently as normal bytes until you append a codepoint above 255.
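A quick sketch of how the operator, not the variable, chooses the interpretation:

```perl
use strict;
use warnings;

my $x = "10";
print $x + 5, "\n";   # + forces numeric context: prints 15
print $x . 5, "\n";   # . (concatenation) forces string context: prints 105
print "same\n" if $x == 10;     # == compares as numbers
print "same\n" if $x eq "10";   # eq compares as strings
```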

References (which are considered scalars as well) hold a reference to any other variable; hashrefs and arrayrefs are most common, along with the coderefs described above.

Arrays are simply a dynamic-length array of scalars (or references).

Hashes (i.e., dictionaries, maps, or whatever you want to call them) are a performance-tuned hash table implementation where every key is a string and every value is a scalar (or reference). Hashes are used in Perl in the same way structs are used in C. Clearly a hash is less efficient than a struct, but it keeps things generic so tasks that require dozens of lines of code in other languages can become one-liners in Perl. For instance, you can dump the contents of a hash into a list of (key, value) pairs or reconstruct a hash from such a list as a natural part of the Perl syntax.
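The hash-to-list round trip mentioned above looks like this (the data is made up):

```perl
use strict;
use warnings;

my %person = (name => "Amy", age => 33);

my @pairs = %person;   # flatten into a (key, value, key, value) list
my %copy  = @pairs;    # rebuild a hash from such a list

print $copy{name}, "\n";   # prints "Amy"
```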

Object model

Any reference can be “blessed” to make it into an object, granting it a multiple-inheritance method-dispatch table. The blessing is simply the name of a package (namespace), and any function in that namespace becomes an available method of the object. The inheritance tree is defined by variables in the package. As a result, you can make modifications to classes or class hierarchies or create new classes on the fly with simple data edits, rather than special keywords or built-in reflection APIs. By combining this with Perl’s local keyword (where changes to a global are automatically undone at the end of the current scope), you can even make temporary changes to class methods or inheritance!

Perl objects only have methods, so attributes are accessed via accessors like the canonical Java get_ and set_ methods. Perl authors usually combine them into a single method of just the attribute name and differentiate get from set by whether a parameter was given.

You can also “re-bless” objects from one class to another, which enables interesting tricks not available in most other languages. Consider state machines, where each method would normally start by checking the object’s current state; you can avoid that in Perl by swapping the method table to one that matches the object’s state.
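A sketch of blessing, package-variable inheritance, a combined get/set accessor, and re-blessing (all names are made up):

```perl
use strict;
use warnings;

package Animal;
sub new   { my ($class, %args) = @_; return bless { %args }, $class }
# one accessor does both get and set, depending on whether an argument is given
sub name  { my $self = shift; $self->{name} = shift if @_; return $self->{name} }
sub speak { return "..." }

package Dog;
our @ISA = ('Animal');   # the inheritance tree is just a package variable
sub speak { return "Woof" }

package main;
my $pet = Dog->new(name => "Rex");
print $pet->name, " says ", $pet->speak, "\n";   # prints "Rex says Woof"

bless $pet, 'Animal';      # re-bless: same data, different method table
print $pet->speak, "\n";   # prints "..."
```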


While other languages spend a bunch of effort on access rules between classes, Perl adopted a simple “if the name begins with underscore, don’t touch it unless it’s yours” convention. Although I can see how this could be a problem with an undisciplined software team, it has worked great in my experience. The only thing C++’s private keyword ever did for me was impair my debugging efforts, yet it felt dirty to make everything public. Perl removes my guilt.

Likewise, an object provides methods, but you can ignore them and just access the underlying Perl data structure. This is another huge boost for debugging.

Garbage collection via reference counting

Although reference counting is a rather leak-prone form of memory management (it doesn’t detect cycles), it has a few upsides. It gives you deterministic destruction of your objects, like in C++, and never interrupts your program with a surprise garbage collection. It strongly encourages module authors to use a tree-of-objects pattern, which I much prefer vs. the tangle-of-objects pattern often seen in Java and JavaScript. (I’ve found trees to be much more easily tested with unit tests.) But, if you need a tangle of objects, Perl does offer “weak” references, which won’t be considered when deciding if it’s time to garbage-collect something.

On the whole, the only time this ever bites me is when making heavy use of closures for event-driven callbacks. It’s easy to have an object hold a reference to an event handle holding a reference to a callback that references the containing object. Again, weak references solve this, but it’s an extra thing to be aware of that JavaScript or Python don’t make you worry about.


The Perl interpreter runs as a single thread, although modules written in C can use threads of their own internally, and Perl can be built with support for multiple interpreters within the same process.

Although this is a large limitation, knowing that a data structure will only ever be touched by one thread is nice, and it means you don’t need locks when accessing them from C code. Even in Java, where locking is built into the syntax in convenient ways, it can be a real time sink to reason through all the ways that threads can interact (and especially annoying that they force you to deal with that in every GUI program you write).

There are several event libraries available to assist in writing event-driven callback programs in the style of Node.js to avoid the need for threads.

Access to C libraries

Aside from directly writing your own C extensions via Perl’s XS system, there are already lots of common C libraries wrapped for you and available on Perl’s CPAN repository. There is also a great module, Inline::C, that takes most of the pain out of bridging between Perl and C, to the point where you just paste C code into the middle of a Perl module. (It compiles the first time you run it and caches the .so shared object file for subsequent runs.) You still need to learn some of the Perl interpreter API if you want to manipulate the Perl stack or pack/unpack Perl’s variables other than your C function arguments and return value.

Memory usage

Perl can use a surprising amount of memory, especially if you make use of heavyweight libraries and create thousands of objects, but with the size of today’s systems it usually doesn’t matter. It also isn’t much worse than other interpreted systems. My personal preference is to only use lightweight libraries, which also generally improve performance.

Startup speed

The Perl interpreter starts in under five milliseconds on modern hardware. If you take care to use only lightweight modules, you can use Perl for anything you might have used Bash for, like hotplug scripts.

Regex implementation

Perl provides the mother of all regex implementations… but you probably already knew that. Regular expressions are built into Perl’s syntax rather than being an object-oriented or function-based API; this helps encourage their use for any text processing you might need to do.

Ubiquity and stability

Perl 5 is installed on just about every modern Unix system, and the CPAN module collection is extensive and easy to install. There’s a production-quality module for almost any task, with solid test coverage and good documentation.

Perl 5 has nearly complete backward compatibility across two decades of releases. The community has embraced this as well, so most of CPAN is pretty stable. There’s even a crew of testers who run unit tests on all of CPAN on a regular basis to help detect breakage.

The toolchain is also pretty solid. The documentation syntax (POD) is a little more verbose than I’d like, but it yields much more useful results than doxygen or Javadoc. You can run perldoc FILENAME to instantly see the documentation of the module you’re writing. perldoc Module::Name shows you the specific documentation for the version of the module that you would load from your include path and can likewise show you the source code of that module without needing to browse deep into your filesystem.

The testcase system (the prove command and Test Anything Protocol, or TAP) isn’t specific to Perl and is extremely simple to work with (as opposed to unit testing based around language-specific object-oriented structure, or XML). Modules like Test::More make writing the test cases so easy that you can write a test suite in about the same time it would take to test your module once by hand. The testing effort barrier is so low that I’ve started using TAP and the POD documentation style for my non-Perl projects as well.

In summary

Perl 5 still has a lot to offer despite the large number of newer languages competing with it. The frontend syntax hasn’t stopped evolving, and you can improve it however you like with custom modules. The Perl 5 engine is capable of handling most programming problems you can throw at it, and it is even suitable for low-level work as a “glue” layer on top of C libraries. Once you get really familiar with it, it can even be an environment for developing C code.

My DeLorean runs Perl

My signature hobby project these days is a computerized instrument cluster for my car, which happens to be a DeLorean. But, whenever I show it to someone, I usually have to give them a while to marvel at the car before they even notice that there’s a computer screen in the dashboard. There’s a similar problem when I start describing the software; programmers immediately get hung up on “Why Perl???” when they learn that the real-time OpenGL rendering of instrument data is all coded in Perl. So, any discussion of my project usually starts with the history of the DeLorean or a discussion of the merits of Perl vs. other, more-likely tools.

I started the project in 2010 with the concept to integrate a computer in the dashboard to act as a personal assistant, but it quickly became a project about replacing the stock instrument cluster with something software-rendered. Based on the level of processing I wanted (I dream big) and the size of screen I wanted, I decided against the usual high-end microcontrollers people might use and instead went with a full Linux PC and desktop monitor, with a low-end microcontroller to read the analog measurements from the car. I was doing OpenGL and C++ at work at the time, so that was my first pick for software. I could write multiple articles about hardware selection, but I’ll try to stay focused on the software for this one. (You can find more of that story on my website.)

After several years of effort, it became apparent that C++ is not a good fit for my large-scale personal projects. Although C++ yields great performance and low resource usage, the biggest resource shortage I had was time and “mental state.” Sometimes I would be away from the project for an entire month, and when I finally had a single day of free time to work on it, I spent it trying to remember where I left off. The worst aspect was that I usually couldn’t finish refactoring my design in a single session, so when I came back to it weeks later, I wasn’t catching all the places where the design change had broken the code. Also, while C++ is generally better than C for catching bugs, I would still end up with occasional memory corruption that could eat up hours of debugging time. There’s also just a lot of development overhead to write the logging and debugging routines needed to diagnose a real-time, multi-threaded application.

Meanwhile, my day job had shifted to working on Perl. I didn’t seek Perl on my own; it was just sort of thrust my way along with urgent projects. However, within a few months I was intrigued by its possibilities, and now it’s my favorite language.

Enter Perl

In 2014, I took the plunge and rewrote the instrument cluster software in Perl. After years of trudging along with C++, I was able to get a working prototype (of the software, at least) within a few months, and moved on to completing the hardware and microcontroller in 2015.

My little Perl success story is primarily about agility. I’m not really a buzzword fan or the kind of guy who reads books about methodologies, but “agile” definitely means something to me now. I feel like Perl hits a magic sweet spot of providing enough structure to build a correct, performant program, while being minimal and generic enough to plug things together with ease, and even offering enough syntax features to express complex operations in terse but readable code. (If you aren’t familiar with Perl’s capabilities, see my companion article “Perl from a Systems Programmer Perspective,” which elaborates on how Perl can be suited for systems work.)

The main, ongoing benefit is the ability to make ad-hoc changes. Because I don’t have a lot of time to plan out the full requirements of my objects, it has been a great boost to productivity to just toss in an additional few attributes on unsuspecting objects, or quickly sort through a list of objects based on criteria that would require awkward reflection code in Java or C++. If I decide I like the change, I go back and rewrite it with properly declared attributes and interfaces. I’ve found I can author a new graphic widget, complete with animations, in less than an hour.


One of the real killers for the C++ version of my project was keeping all the binary-level code in sync. The various components (rendering, message bus, logic core, microcontroller firmware, control tools, debug tools) were all sharing binary data structures, and keeping the dependencies straight in the makefile was a headache. I’m personally sour toward the automake family of tools, so whenever I needed to do something odd (like compile the microcontroller code using avr-gcc), I would risk getting frustrated and detouring into a new grand scheme to create a replacement for autotools (certainly a thing I don’t need to waste time on).

During my change to Perl, I converted the microcontroller to show up as a Linux serial device and changed the protocol to strings of short text notation. (The messages are actually smaller than the binary packet structs I had been using before.) This let me debug it with a simple socat on /dev/ttyS0. It also simplified the daemon that talks to the microcontroller. The C++ version was written with two threads, since I was using libusb, and its easiest mode of operation has a blocking read method. The Perl version simply opens a stream to the character device and reads lines of text.

I made a similar change to the host-side communication and had the daemon generate lines of JSON instead of binary packets. Since it is so incredibly easy to implement this in Perl with libraries like AnyEvent, I ditched the “message bus” idea entirely and just had each program create its own Unix socket, to which other programs can connect as needed. Debugging a single thread is much less painful, and there wasn’t even much debugging to do anyway, because AnyEvent does most of the work for me.

With everything passed around as JSON, there are no longer any message structs to worry about. None of my Perl programs requires a make process anymore, so the only piece of the project that still has a makefile is the microcontroller firmware, and it is simple enough that I just wrote it out by hand.


Processing low-level math directly with Perl can be slow, but the best way to use Perl where performance counts is to glue together C libraries. Perl has an extension system called XS to help you bind C code to Perl functions, but even better, there’s a CPAN repository module called Inline, which lets you paste C or C++ (and others) directly into a Perl module, and it compiles the first time the module is loaded. (But, yes, I pre-compile them before building the firmware image for the car.)

Thanks to Inline, I can move code back and forth from Perl to C as needed without messing around with library versions. I was able to bring over some of my C++ classes directly into the new Perl version of the instrument cluster. I was also able to wrap the C++ objects of the FreeType for OpenGL (FTGL) library, which is an important piece I didn’t want to have to re-invent.

The CPU usage of the system was about 15% with the C++ implementation. With Perl it’s about 40%. Almost all of that is the rendering code, so if I need to I can always push more of it back into C++. But, I could also just upgrade the computer, and 40% isn’t even a problem because I’m maintaining a full 60 frames per second (and I’m running a 6.4-watt processor).

Broader horizons

Perl’s CPAN public package repository is especially large, documented, tested, and stable compared to other languages. Naturally this depends on the individual authors (and there are plenty of counter-examples), but I’ve been impressed with the pervasive culture of test coverage and helpful documentation. Installing and using new Perl modules is also ridiculously easy. Not only do I avoid the toolchain efforts of C/C++, I get the advantage of Perl authors who have already overcome conflicting thread models or event loops or logging systems to give me a plugin experience.

With everything written in Perl, I can just grab anything I like off CPAN. For instance, I could have the car send me emails or text messages, host a web app for controlling features via phone, write Excel files of fuel mileage, and so on. I haven’t started on these features yet, but it feels nice that the barriers are gone.

Contributing back

In a decade of doing C++, I never once released a library for public consumption. A lot of it is due to the extreme awkwardness of autotools, and the fact that just creating a system-installed C++ library is a royal pain even without packaging it up properly for distribution.

Perl makes module authoring and testing and documentation extremely easy. It is so easy that I wrote test cases and documentation for my Math-InterpolationCompiler for my own benefit, and then published them on CPAN because, “why not?” I also became maintainer of X11-Xlib and greatly expanded its API, and then wrote X11-GLX so that I could finally have all my OpenGL setup code in proper order. (This was also part of my attempt to make the instrument renderer into a compositing window manager, which turned out to be much harder than I expected.) Currently, I’m working on making my maps/navigation database a CPAN module as well.

But why not…

“But, why not Language X?” you say, with “Python” a common value for X. Well, for one thing, I know a lot more Perl than Python. I’m using a lot of deep and advanced Perl features, so picking up Python would mean another large learning curve. I’m also partial to Perl’s toolchain, especially tools like prove and perldoc. I suspect it’s possible to do it all in Python as well, but I have no compelling reason to switch. As for any other language X: no other language can match the wealth of packages that Perl or Python offer, so I’m less inclined to experiment with them. I could mix languages, since my project is composed of multiple processes, but having everything in the same language means I can more easily share code between programs.

“Why not Android?” is another common question. Indeed, a tablet is a much more embeddable device than a whole PC, and it comes with access to mapping apps. The obvious first problem is, I’d be back on Java and lose most of my prized agility. Second, I’m not aware of any way to merge the graphics of separate apps (such as using Google Maps as a texture within the dashboard), although there might be one. And third, I’ve been working on a feature to take video feeds and tie them directly into the graphics as textures. I don’t know of any tablets that could capture video from external sources in real time at a low enough latency, much less directly into a graphics texture buffer. Linux desktop software is much more open to this sort of deep mangling, so I’ll probably continue with it.

On the whole, I’m just happy I’ve finished enough that I can drive my DeLorean.

A school in India defies the traditional education model

Located in a sleepy village just two hours away from the bustling metropolis of Mumbai is a school that defies traditional educational models by collaboratively owning, building, and sharing knowledge and technology. The school uses only open source software and hardware in its approach to learning, and takes pride in the fact that none of its students have used or even seen proprietary software, including the ubiquitous Windows operating system.

The Tamarind Tree School, located in Dahanu Taluka, Maharashtra, India, is an experiment in open education. Open education is a philosophy about how people produce, share, and build on knowledge and technology, advocating a world in which education is for social good, and everyone has equal opportunity and access to education, training, and knowledge.

Why open education?

The school’s founders believe that the commodification and ownership of knowledge is the primary reason for the inequity in access to quality educational resources. While the Internet may have created a proliferation of digital content and learning tools, the relationship between technology creation, knowledge building, access, and ownership remains skewed for most people in society.

The trend toward expensive primary schools in India; copyrights on learning videos, academic journals, and software; “free” educational apps; and the way laptops and devices are manufactured all support the idea that knowledge is owned and controlled by a few.

Many people confuse free usage with free access. But freedoms such as ownership and collaboration are reduced or eliminated when learning communities do not feel empowered to build their own digital devices, set up their own networks, or create their own digital learning tools. As a result, many learners unknowingly become thieves (as seen in the rampant use of pirated software in India) or compromise their fundamental freedom to own and engage with the digital world on their own terms. This reality is even more grim in rural India, where disadvantaged communities are denied access and equal opportunity to the digital world.

How do we create a world where everyone enjoys access to quality education? One approach is to fundamentally change the way knowledge and technology are owned and controlled.

The open source movement offers a solution.

Open education is based on the premise that knowledge should be collaboratively built and shared by all. It believes in creating producers and collaborators of knowledge rather than consumers of it.

How we implement open education

Based on these values and philosophies, the Tamarind Tree school has been experimenting with several open source options:

1. Single-board computers

The school has been able to avoid proprietary hardware, thanks to the work of organizations around the world that build single-board computers. A single-board computer (SBC) is a complete computer built on a single circuit board, complete with microprocessor(s), memory, input/output (I/O), and other required features.

The school selected a robust, affordable SBC built by the Raspberry Pi Foundation, and uses it to teach children programming skills and computational thinking. Students at Tamarind Tree enjoy coding and programming using the visual programming tool Scratch on these hardy open source machines.

2. Open source gamified software and open educational resources

The school, which uses only open educational resources (OERs), employs a combination of open digital tools like GCompris, Tux Math, Tux Paint, JFraction, and programs from the open source KDE Community to teach English, math, and science in a fun, interactive manner.

3. My Big Campus learning management system

To enable relevant, contextual learning, Tamarind Tree set up its own learning management system, which is hosted on the open source platform Moodle. Students as young as 7 years old can log on to their courses, along with a facilitator, and are guided to different online and offline activities. The system also supports individualized learning. The curriculum hosted at My Big Campus is derived from the National Council of Educational Research and Training in New Delhi. Students enjoy answering quizzes, commenting on images and blogs, creating digital art, and more. Courses are created contextually, grading can be done online, and students can learn at their own pace.

4. E-library

Tamarind Tree also has a facility where any student with a digital device can read books, articles, or news reports from a collection of more than 3,000 resources hosted on the school’s e-library server. The e-library, which is updated continuously, has been set up on the single-board computer and uses the Calibre open source library management system to organize, tag, and upload resources. All books hosted on the server are in the public domain or hold a Creative Commons license.

As students build knowledge by creating and playing their own computer games and participating in other educational activities, teachers can customize course materials to fit the needs of individual learners through digital content and local resources. The school’s goal is to establish that knowledge and technology can be entirely built, owned, and controlled by learning communities by using open source educational resources.

Is the future of education open?

Open education can help build a society that can provide free and open access to education and knowledge for all people with a desire to learn. The Tamarind Tree School demonstrates the potential of creating an educational model that believes in the democratization of knowledge.

An introduction to Eclipse MicroProfile

Enterprise Java has been defined by two players: Spring on one side and Java Enterprise Edition on the other. The Java EE set of specifications was developed in the Java Community Process under the stewardship of Oracle. The current Java EE 8 was released in September 2017; the prior version came out in 2013.

Between those releases, the industry saw a lot of change, most notably containers, the ubiquitous use of JSON, HTTP/2, and microservices architectures. Unfortunately, there was not much related activity around Java EE, but users of the many Java EE-compliant servers demanded adoption of those new technologies and paradigms.

As a result, a group of vendors and community members founded MicroProfile to develop new specifications for using Java EE in microservice architectures that could be added into future versions of Java EE.

The first release of MicroProfile, in summer 2016, included three existing standards to serve as a baseline. At the end of 2016, MicroProfile joined the Eclipse Foundation (which explains Eclipse in the name) to leverage Eclipse’s strong governance and intellectual property expertise.

In 2017, there were two additional releases, and the next one is right around the corner. MicroProfile aims to release an update roughly every three months with specific content in a time-boxed way. Releases consist of a series of specifications, each developed at its own pace, and the umbrella release contains all of the specifications’ current versions.

What’s in the box?

Sweets for my sweet, sugar for my honey.

Well, luckily not, as too much sugar is bad for your health. But the individual specifications do have some pretty tasty content. Development of new specifications started after the first release.

The specifications that make up MicroProfile 1.2, which was released at JavaOne 2017, are:

  • Metrics: Deals with telemetry data and how it is exposed in a uniform way. This includes data from the underlying Java virtual machine as well as data from applications.
  • Health: Reports whether a service is healthy. This is important for schedulers like Kubernetes to determine if an application (container) should be killed and a new one started.
  • Config: Provides a uniform way of relaying configuration data into the application independent of the configuration source.
  • Fault tolerance: Includes mechanisms to make microservices resilient to failures in the network or other services they rely on, such as defining timeouts for calls to remote services, retrying policies in case of failure, and setting fallback methods.
  • JWT propagation: JSON Web Token (JWT) is a token-based authentication/authorization system that allows you to authenticate, authorize, and verify identities based on a security token. JWT propagation defines the interoperability and container-integration requirements for using JWT with Java EE-style role-based access control.

The just-released MicroProfile 1.3 includes updates to some of the above and adds the following new specifications:

  • OpenTracing: A mechanism for distributed tracing of calls across a series of microservices.
  • OpenAPI: A way to document data models and REST APIs so that machines can read them and automatically generate client code from the documentation. OpenAPI was derived from the Swagger specification.
  • REST client: A type-safe REST client that builds on the standard JAX-RS client to do more heavy lifting so consumer code can rely on strongly typed data and method invocations.

Upcoming releases are expected to pick up some APIs and new API versions from Java EE 8, such as JSON-B 1.0, JSON-P 1.1, CDI 2.0, and JAX-RS 2.1.

Where can I learn more?

How can I get involved?

The main communication channel is the MicroProfile discussion group. All specifications have a GitHub repository under the Eclipse organization, so they are using GitHub issues and pull requests. Also, each specification usually has a Gitter discussion group.

If you have an idea for a new MicroProfile specification, join the discussion group, present your idea, and hack away. Once others support your idea, a new repository will be created, and the more formal process can begin.

How to generate webpages using CGI scripts

Back in the stone age of the Internet, when I created my first business website, life was good.

I installed Apache and created a few simple HTML pages that stated a few important things about my business and gave important information like an overview of my product and how to contact me. It was a static website because the content seldom changed. Maintenance was simple because of the unchanging nature of my site.

Static content

Static content is easy and still common. Let’s take a quick look at a couple of sample static web pages. You don’t need a working website to perform these little experiments. Just place the files in your home directory and open them with your browser. You will see exactly what you would if the file were served to your browser via a web server.

The first thing you need on a static website is the index.html file, which is usually located in the /var/www/html directory. This file can be as simple as a text phrase such as “Hello world” without any HTML markup at all; that would simply display the text string. Create index.html in your home directory and add “Hello world” (without the quotes) as its only content. Open index.html in your browser using a file:// URL that points to your home directory.
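From the shell, that experiment is two steps (substitute your own user name in the URL):

```shell
# Create the simplest possible "page" in your home directory
echo "Hello world" > ~/index.html
# Then open it in a browser with a file:// URL, for example:
#   file:///home/<username>/index.html
```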


So HTML is not required, but if you had a large amount of text that needed formatting, a web page with no HTML coding would be incomprehensible, with everything running together.

So the next step is to make the content more readable by using a bit of HTML coding to provide some formatting. The following command creates a page with the absolute minimum markup required for a static web page with HTML. You could also use your favorite editor to create the content.

echo "<h1>Hello World</h1>" > index.html

Now view index.html and see the difference.

Of course, you can put a lot of additional HTML around the actual content line to make a more complete and standard web page. A more complete version will still display the same result in the browser, but it also forms the basis for a more standardized website. Go ahead and use such a version for your index.html file and display it in your browser.
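Such a page might look like the following generic skeleton (the article’s exact markup isn’t shown here, so treat this as one possibility):

```shell
# Write a minimal but standards-style page around the same content
cat > index.html <<'EOF'
<!DOCTYPE html>
<html>
  <head>
    <title>My Home Page</title>
  </head>
  <body>
    <h1>Hello World</h1>
  </body>
</html>
EOF
```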

I built a couple static websites using these techniques, but my life was about to change.

Dynamic web pages for a new job

I took a new job in which my primary task was to create and maintain the CGI (Common Gateway Interface) code for a very dynamic website. In this context, dynamic means that the HTML needed to produce the web page on a browser was generated from data that could be different every time the page was accessed. This includes input from the user on a web form that is used to look up data in a database. The resulting data is surrounded by appropriate HTML and displayed on the requesting browser. But it does not need to be that complex.

Using CGI scripts for a website allows you to create simple or complex interactive programs that can be run to provide a dynamic web page that can change based on input, calculations, current conditions in the server, and so on. There are many languages that can be used for CGI scripts. We will look at two of them, Perl and Bash. Other popular CGI languages include PHP and Python.

This article does not cover installation and setup of Apache or any other web server. If you have access to a web server that you can experiment with, you can directly view the results as they would appear in a browser. Otherwise, you can still run the programs from the command line and view the HTML that would be created. You can also redirect that HTML output to a file and then display the resulting file in your browser.

Using Perl

Perl is a very popular language for CGI scripts. Its strength is that it is a very powerful language for the manipulation of text.

To get CGI scripts to execute, you need the following line in the httpd.conf for the website you are using. This tells the web server where your executable CGI files are located. For this experiment, let’s not worry about that.

ScriptAlias /cgi-bin/ "/var/www/cgi-bin/"

Add the following Perl code to the file index.cgi, which should be located in your home directory for your experimentation. Set the ownership of the file to apache.apache when you use a web server, and set the permissions to 755 because it must be executable no matter where it is located.

#!/usr/bin/perl
print "Content-type: text/html\n\n";
print "<html><body>\n";
print "<h1>Hello World</h1>\n";
print "Using Perl<p>\n";
print "</body></html>\n";

Run this program from the command line and view the results. It should display the HTML code it will generate.

Now view index.cgi in your browser. Well, all you get is the contents of the file. Browsers really need to have this delivered as CGI content. Apache does not actually know that it needs to run the file as a CGI program unless the Apache configuration for the website includes the “ScriptAlias” definition shown above. Without that bit of configuration, Apache simply sends the data in the file to the browser. If you have access to a web server, you can try this out with your executable index files in the /var/www/cgi-bin directory.

To see what this would look like in your browser, run the program again and redirect the output to a new file. Name it whatever you want. Then use your browser to view the file that contains the generated content.

The above CGI program is still generating static content because it always displays the same output. Add the following line to your CGI program immediately after the “Hello World” line. The Perl “system” command executes the command that follows it in a system shell; the command’s output goes to STDOUT, so it appears in the generated page. In this case, we simply grep the current RAM usage out of the output of the free command.

system "free | grep Mem\n";

Now run the program again and redirect the output to the results file. Reload the file in the browser. You should see an additional line that displays the system memory statistics. Run the program and refresh the browser a couple more times, and notice that the memory usage changes occasionally.

Using Bash

Bash is probably the simplest language of all for use in CGI scripts. Its primary strength for CGI programming is that it has direct access to all of the standard GNU utilities and system programs.

Rename the existing index.cgi to Perl.index.cgi and create a new index.cgi with the following content. Remember to set the permissions correctly to executable.

#!/bin/bash
echo "Content-type: text/html"
echo ""
echo '<html>'
echo '<head>'
echo '<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">'
echo '<title>Hello World</title>'
echo '</head>'
echo '<body>'
echo '<h1>Hello World</h1><p>'
echo 'Using Bash<p>'
free | grep Mem
echo '</body>'
echo '</html>'
exit 0

Execute this program from the command line and view the output, then run it and redirect the output to the temporary results file you created before. Then refresh the browser to view what it looks like displayed as a web page.


It is actually very simple to create CGI programs that can be used to generate a wide range of dynamic web pages. This is a trivial example but you should now see some of the possibilities.  

Create custom wallpaper slideshows in GNOME

A very cool, yet lesser known, feature in GNOME is its ability to display a slideshow as your wallpaper. You can select a wallpaper slideshow from the background settings panel in the GNOME Control Center. Wallpaper slideshows can be distinguished from static wallpapers by a small clock emblem displayed in the lower-right corner of the preview.

Some distributions come with pre-installed slideshow wallpapers. For example, Ubuntu includes the stock GNOME timed wallpaper slideshow, as well as one made up of Ubuntu wallpaper contest winners.

What if you want to create your own custom slideshow to use as a wallpaper? While GNOME doesn’t provide a user interface for this, it’s quite easy to create one with a couple of simple XML files in your home directory. Fortunately, the background selection in the GNOME Control Center honors some common directory paths, which makes it easy to create a slideshow without having to edit anything provided by your distribution.

Getting started

Using your favorite text editor, create an XML file in $HOME/.local/share/gnome-background-properties/. Although the filename isn’t important, the directory name matters (and you’ll probably have to create the directory). For my example, I created /home/ken/.local/share/gnome-background-properties/osdc-wallpapers.xml with the following content:
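The directory usually has to be created first; from a terminal:

```shell
# Create the directory the GNOME Control Center scans for wallpaper
# definitions; it does not exist by default.
mkdir -p "$HOME/.local/share/gnome-background-properties"
```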

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE wallpapers SYSTEM "gnome-wp-list.dtd">
<wallpapers>
  <wallpaper deleted="false">
    <name>Wallpapers</name>
    <!-- example path; point this at your own slideshow file -->
    <filename>/home/ken/Pictures/Wallpapers/osdc.xml</filename>
  </wallpaper>
</wallpapers>
The above XML file needs a <wallpaper> stanza for each slideshow or static wallpaper you want to include in the backgrounds panel of the GNOME Control Center.

In this example, my osdc.xml file looks like this:

<?xml version="1.0" ?>
<background>
  <static>
    <!-- Duration in seconds to display the background -->
    <duration>30.0</duration>
    <!-- image paths are illustrative -->
    <file>/home/ken/Pictures/Wallpapers/osdc_1.png</file>
  </static>
  <transition>
    <!-- Duration of the transition in seconds, default is 2 seconds -->
    <duration>0.5</duration>
    <from>/home/ken/Pictures/Wallpapers/osdc_1.png</from>
    <to>/home/ken/Pictures/Wallpapers/osdc_2.png</to>
  </transition>
</background>

There are a few important pieces in the above XML. The <background> node in the XML is your outer node. Each background supports multiple <static> and <transition> nodes.

The <static> node defines an image to be displayed and the duration to display it with <duration> and <file> nodes, respectively.

The <transition> node defines the <duration>, the <from> image, and the <to> image for each transition.

Changing wallpaper throughout the day

Another cool GNOME feature is time-based slideshows. You can define the start time for the slideshow and GNOME will calculate times based on it. This is useful for setting different wallpapers based on the time of day. For example, you could set the start time to 06:00 and display one wallpaper until 12:00, then change it for the afternoon, and again at 18:00.

This is accomplished by defining the <starttime> in your XML like this (only the timing stanza is shown; image paths are illustrative):

<background>
  <starttime>
    <!-- A start time in the past is fine -->
    <year>2017</year>
    <month>11</month>
    <day>21</day>
    <hour>6</hour>
    <minute>00</minute>
    <second>00</second>
  </starttime>
  <static>
    <duration>21600.0</duration>
    <file>/home/ken/Pictures/Wallpapers/morning.png</file>
  </static>
  <!-- further <static> and <transition> stanzas follow -->
</background>

The above XML starts the animation at 06:00 on November 21, 2017, with a duration of 21,600 seconds, equal to six hours. This displays your morning wallpaper until 12:00, at which time it changes to your next wallpaper. You can continue in this manner to change the wallpaper at any intervals you’d like throughout the day, but make sure the total of all your durations is 86,400 seconds (equal to 24 hours).

GNOME will calculate the delta between the start time and the current time and display the correct wallpaper for the current time. For example, if you select your new wallpaper at 16:00, GNOME will display the proper wallpaper for 36,000 seconds past the start time of 06:00.

For a complete example, see the adwaita-timed slideshow provided by the gnome-backgrounds package in most distributions. It’s usually found in /usr/share/backgrounds/gnome/adwaita-timed.xml.

For more information

Hopefully this encourages you to take a dive into creating your own slideshow wallpapers. If you would like to download complete versions of the files referenced in this article, they can be found on GitHub.

If you’re interested in utility scripts for generating the XML files, you can do an internet search for gnome-background-generator.

Paying it forward at Finland's Aalto Fablab

Originating at MIT, a fab lab is a technology prototyping platform where learning, experimentation, innovation, and invention are encouraged through curiosity, creativity, hands-on making, and most critically, open knowledge sharing. Each fab lab provides a common set of tools (including digital fabrication tools like laser cutters, CNC mills, and 3D printers) and processes, so you can learn how to work in a fab lab anywhere and use those skills at any of the 1,000+ fab labs across the globe. There is probably a fab lab near you.

Fab labs can be found anywhere avant-garde makers and hackers live, but they have also cropped up at libraries and other public spaces. For example, the Aalto Fablab, the first fab lab in Finland, is in the basement of Aalto University’s library, in Espoo. Solomon Embafrash, the studio master, explains, “Aalto Fablab was in the Arabia campus with the School of Arts and Design since 2011. As Aalto decided to move all the activities concentrated in one campus (Otaniemi), we decided that a dedicated maker space would complement the state-of-the-art library in the heart of Espoo.”

The library, which is now a full learning center, sports a maker space that consists of a VR hub, a visual resources center, a studio, and of course, the Fablab. With the expansion of the Helsinki metro to a new station across the street from the Aalto Fablab, everyone in the region now has easy access to it.

The Fab Lab Charter states: “Designs and processes developed in fab labs can be protected and sold however an inventor chooses, but should remain available for individuals to use and learn from.” The “protected” part does not quite meet the requirements set by the Open Source Hardware Association’s definition of open source hardware; however, for those not involved in commercialization of products, the code is available for a wide range of projects created in fab labs (like the FabFi, an open source wireless network).

That means fab labs are effectively feeding the open source ecosystem that allows digitally distributed manufacturing of a wide range of products, as many designers choose to release their designs with fully free licenses. Even the plans for creating a fab lab are openly shared by the U.S. non-profit Fab Foundation.

All fab labs are required to provide open access to the community; however, some, like the Aalto Fablab, take that requirement one step further. The Aalto Fablab is free to use, but if you wish to use bulk materials from its stock for your project—for example, to make a new chair—you need to pay for them. You are also expected to respect the philosophy of open knowledge-sharing by helping others, documenting your work, and sharing what you have learned. Specifically, the Aalto Fablab asks that you “pay forward” what you have learned to other users, who may be able to build upon your work and help speed development.


Embafrash adds, “There is a very old tradition of free services in Finland, like the library service and education. We used to charge users a few cents for the material cost of the 3D prints, but we found that it makes a lot of sense to keep it free, as it draws people to the core philosophy of the Fablab, which is idea sharing and documentation.”

This approach has proven successful, fostering enormous interest in the local community for making and sharing. For example, the Unseen Art project, an open source platform that allows the visually impaired to enjoy 3D printed art, started in the Aalto Fablab.

Fablab members organize local Maker Faire events and work closely with the maker community, local schools, and other organizations. “The Fablab has open days, which are very popular times that people from outside the university get access to the resources, and our students get the exposure to work with people outside the school community,” Embafrash says.

In this way, the more they share, the more their university benefits.

This article was supported by Fulbright Finland, which is currently sponsoring my research in open source scientific hardware in Finland as the Fulbright-Aalto University Distinguished Chair.

How to Download and Extract Tar Files with One Command

Tar (Tape Archive) is a popular file archiving format in Linux. It can be used together with gzip (tar.gz) or bzip2 (tar.bz2) for compression. It is the most widely used command-line utility for creating compressed archive files (packages, source code, databases, and so on) that can be transferred easily from one machine to another or over a network.
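As a quick illustration of the format, the following sketch creates a gzip-compressed tar archive and lists its contents (the file names are invented for the example):

```shell
# Build a small .tar.gz and inspect it.
mkdir -p project && echo "data" > project/readme.txt
tar -czf project.tar.gz project   # -c create, -z gzip-compress, -f archive name
tar -tzf project.tar.gz           # -t list the archive contents
```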

In this article, we will show you how to download tar archives using two well-known command-line downloaders, wget and cURL, and extract them with a single command.

How to Download and Extract File Using Wget Command

The example below shows how to download and unpack the latest GeoLite2 Country database (used by the GeoIP Nginx module) in the current directory.

# wget -c <archive-url> -O - | tar -xz
Download and Extract File with Wget


The wget option -O specifies the file to which the document is written; here we use -, meaning the download is written to standard output and piped to tar. The tar flag -x enables extraction of archive files, and -z decompresses archives created by gzip.
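You can see the same piping mechanism without any network access by feeding tar a local archive on standard input; a small self-contained sketch:

```shell
# Build a throwaway archive, delete the originals, then restore them by
# streaming the compressed data into tar, just as "wget -O - | tar -xz" does.
mkdir -p demo && echo "hello" > demo/file.txt
tar -czf demo.tar.gz demo
rm -r demo                        # remove the originals ...
cat demo.tar.gz | tar -xz         # ... and restore them from the stream
cat demo/file.txt                 # -> hello
```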

To extract the tar files to a specific directory, /etc/nginx/ in this case, use the -C flag as follows.

Note: If you are extracting files to a directory that requires root permissions, use the sudo command to run tar.

$ sudo wget -c <archive-url> -O - | sudo tar -xz -C /etc/nginx/
Download and Extract File to Directory


Alternatively, you can use the following command; here, the archive file is downloaded to your system before it is extracted.

$ sudo wget -c <archive-url> && tar -xzf GeoLite2-Country.tar.gz

To extract the compressed archive file to a specific directory, use the following command.

$ sudo wget -c <archive-url> && sudo tar -xzf GeoLite2-Country.tar.gz -C /etc/nginx/

How to Download and Extract File Using cURL Command

Considering the previous example, this is how you can use cURL to download and unpack archives in the current working directory.

$ sudo curl <archive-url> | tar -xz
Download and Extract File with cURL


To extract the files to a different directory while downloading, use one of the following commands.

$ sudo curl <archive-url> | sudo tar -xz -C /etc/nginx/
$ sudo curl -O <archive-url> && sudo tar -xzf GeoLite2-Country.tar.gz -C /etc/nginx/

That’s all! In this short but useful guide, we showed you how to download and extract archive files with a single command. If you have any queries, use the comment section below to reach us.