Amplify – NGINX Monitoring Made Easy

NGINX Amplify is a collection of useful tools for extensively monitoring an open source NGINX web server or NGINX Plus. With NGINX Amplify you can monitor performance, keep track of the systems running NGINX, and practically examine and fix problems associated with running and scaling web applications.

It can be used to visualize and identify NGINX web server performance bottlenecks, overloaded servers, or potential DDoS attacks, and to enhance and optimize NGINX performance with intelligent advice and recommendations.

In addition, it can notify you when something is wrong with any part of your application setup, and it also serves as a web application capacity and performance planner.

The NGINX Amplify architecture is built on three key components, which are described below:

  • NGINX Amplify Backend – the core system component, implemented as a SaaS (Software as a Service). It incorporates a scalable metrics collection framework, a database, an analytics engine, and a core API.
  • NGINX Amplify Agent – a Python application which should be installed and run on monitored systems. All communications between the agent and the SaaS backend are done securely over SSL/TLS; all traffic is always initiated by the agent.
  • NGINX Amplify Web UI – a user interface compatible with all major browsers, accessible only via TLS/SSL.


The web UI displays graphs for NGINX and operating system metrics, allows for the creation of user-defined dashboards, and offers a static analyzer to improve NGINX configuration as well as an alert system with automated notifications.

Step 1: Install Amplify Agent on Linux System

1. Open your web browser, type the address below and create an account. A link will be sent to your email; use it to verify the email address and log in to your new account.

https://amplify.nginx.com

2. After that, log into the remote server to be monitored via SSH, and download the NGINX Amplify agent auto-install script using the curl or wget command.

$ wget https://github.com/nginxinc/nginx-amplify-agent/raw/master/packages/install.sh
OR
$ curl -L -O https://github.com/nginxinc/nginx-amplify-agent/raw/master/packages/install.sh 

3. Now run the command below with superuser privileges, using the sudo command, to install the Amplify agent package (the API_KEY will be different, unique for every system that you add).

$ sudo API_KEY='e126cf9a5c3b4f89498a4d7e1d7fdccf' sh ./install.sh 
Install Nginx Amplify Agent


Note: You will possibly get an error indicating that stub_status has not been configured; this will be done in the next step.

4. Once the installation is complete, go back to the web UI and after about 1 minute, you will be able to see the new system in the list on the left.

Step 2: Configure stub_status in NGINX

5. Now, you need to set up the stub_status configuration to build key NGINX graphs (NGINX Plus users need to configure either the stub_status module or the extended status module).

Create a new configuration file for stub_status under /etc/nginx/conf.d/.

$ sudo vi /etc/nginx/conf.d/stub_status.conf

Then copy and paste the following stub_status configuration in the file.

server {
    listen 127.0.0.1:80;
    server_name 127.0.0.1;

    location /nginx_status {
        stub_status;
        allow 127.0.0.1;
        deny all;
    }
}

Save and close the file.

6. Next, restart Nginx services to activate the stub_status module configuration, as follows.

$ sudo systemctl restart nginx
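After the restart, you can sanity-check the endpoint locally with curl -s http://127.0.0.1/nginx_status. The response is a small block of counters; as a sketch (the numbers below are made up for illustration), the active connection count can be pulled out with awk:

```shell
# Typical stub_status response; in practice you would capture it with:
#   sample=$(curl -s http://127.0.0.1/nginx_status)
sample='Active connections: 2
server accepts handled requests
 112 112 121
Reading: 0 Writing: 1 Waiting: 1'

# Extract the active connection count (third field of the first line).
active=$(printf '%s\n' "$sample" | awk '/^Active connections/ {print $3}')
echo "active=$active"
```

If curl returns 403 or 404 instead, re-check the allow/deny rules and the location path in the stub_status configuration above.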

Step 3: Configure Additional NGINX Metrics for Monitoring

7. In this step, you need to set up additional NGINX metrics to keep a close eye on your application's performance. The agent gathers metrics from active and growing access.log and error.log files, whose locations it detects automatically. Importantly, it must be allowed to read these files.

All you have to do is define a specific log_format, like the one below, in your main NGINX configuration file, /etc/nginx/nginx.conf.

log_format main_ext '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for" '
                    '"$host" sn="$server_name" '
                    'rt=$request_time '
                    'ua="$upstream_addr" us="$upstream_status" '
                    'ut="$upstream_response_time" ul="$upstream_response_length" '
                    'cs=$upstream_cache_status';

Then use the above log format when defining your access_log, and set the error_log log level to warn, as shown.

access_log /var/log/nginx/suasell.com/suasell.com_access_log main_ext;
error_log /var/log/nginx/suasell.com/suasell.com_error_log warn;

8. Now restart the NGINX service once more to put the latest changes into effect.

$ sudo systemctl restart nginx

Step 4: Monitor Nginx Web Server Via Amplify Agent

9. Finally, you can begin monitoring your Nginx web server from the Amplify Web UI.

Nginx Amplify Overview


Nginx Amplify Graph


To add another system to monitor, simply go to Graphs, click on “New System”, and follow the steps above.

Nginx Amplify Homepage: https://amplify.nginx.com/signup/

Amplify is a powerful SaaS solution for monitoring your OS, your NGINX web server, as well as NGINX-based applications. It offers a single, unified web UI for keeping an eye on multiple remote systems running NGINX. Use the comment form below to share your thoughts about this tool.

How to Convert Images to WebP Format in Linux

One of the best practices you will often hear about for optimizing your website's performance is using compressed images. In this article, we will share with you an image format called WebP, for creating compressed, quality images for the web.

WebP is a relatively new, open source image format, designed by Google, that offers exceptional lossless and lossy compression for images on the web. To use it, you need to download the pre-compiled utilities for Linux, Windows, or Mac OS X.

With this modern image format, webmasters and web developers can create smaller, richer images that make the web faster.

How to Install WebP Tool in Linux

Thankfully, the webp package is present in the official Ubuntu repositories; you can install it using the APT package manager as shown.

$ sudo apt install webp 


On other Linux distributions, start by downloading the webp package from Google's repository using the wget command as follows.

$ wget -c https://storage.googleapis.com/downloads.webmproject.org/releases/webp/libwebp-0.6.1-linux-x86-32.tar.gz

Now extract the archive file and move into the extracted package directory as follows.

$ tar -xvf libwebp-0.6.1-linux-x86-32.tar.gz
$ cd libwebp-0.6.1-linux-x86-32/
$ cd bin/
$ ls
Webp Packages


As you can see from the above screenshot, the package contains a precompiled library (libwebp) for adding WebP encoding or decoding to your programs, plus the various WebP utilities listed below.

  • anim_diff – tool to display the difference between animation images.
  • anim_dump – tool to dump the frames of animated images.
  • cwebp – webp encoder tool.
  • dwebp – webp decoder tool.
  • gif2webp – tool for converting GIF images to webp.
  • img2webp – tool for converting a sequence of images into an animated webp file.
  • vwebp – webp file viewer.
  • webpinfo – used to view info about a webp image file.
  • webpmux – webp muxing tool.

To convert an image to webp, you can use the cwebp tool, where the -q switch defines the output quality and -o specifies the output file.

$ cwebp -q 60 Cute-Baby-Girl.png -o Cute-Baby-Girl.webp
OR
$ ./cwebp -q 60 Cute-Baby-Girl.png -o Cute-Baby-Girl.webp
Convert Image to WebP Format


You can view the converted webp image using the vwebp tool.

$ ./vwebp Cute-Baby-Girl.webp
View WebP Format Image


You can see all options for any of the tools above by running them without arguments, or by using the -longhelp flag, for example.

$ ./cwebp -longhelp
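To convert a whole directory of PNG images in one go, a short shell loop works; this is a sketch that assumes cwebp is on your PATH (otherwise use its full path as above):

```shell
# Convert every PNG in the current directory to WebP at quality 80.
for f in *.png; do
    [ -e "$f" ] || continue      # no PNGs matched; skip the literal "*.png"
    out="${f%.png}.webp"         # e.g. photo.png -> photo.webp
    cwebp -q 80 "$f" -o "$out"
done
```

The quality value is a trade-off; lower values produce smaller but visibly lossier files.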

Last but not least, if you want to run the above programs without typing their absolute paths, add the directory ~/libwebp-0.6.1-linux-x86-32/bin to your PATH environment variable in your ~/.bashrc file.

$ vi ~/.bashrc

Add the line below towards the end of the file.

export PATH=$PATH:~/libwebp-0.6.1-linux-x86-32/bin

Save the file and exit. Then open a new terminal window and you should be able to run all webp programs like any other system commands.

WebP Project Homepage: https://developers.google.com/speed/webp/

Also check out these useful related articles:

  1. 15 Useful ‘FFmpeg’ Commands for Video, Audio and Image Conversion in Linux
  2. Install ImageMagick (Image Manipulation) Tool on Linux
  3. 4 Ways to Batch Convert Your PNG to JPG and Vice-Versa

WebP is just one of the many products coming out of Google’s continuous efforts towards making the web faster. Remember to share your thoughts concerning this new image format for the web, via the feedback form below.

Fix “The plain HTTP request was sent to HTTPS port” Error in Nginx

In this article, we will show how to solve the “400 Bad Request: The plain HTTP request was sent to HTTPS port” error in the Nginx HTTP server. This error normally arises when you try to configure Nginx to handle both HTTP and HTTPS requests.

For the purpose of this guide, we are considering a scenario in which Nginx serves multiple websites implemented through server blocks (the equivalent of virtual hosts in Apache), where only one website uses SSL and the rest do not.

Read Also: The Ultimate Guide to Secure, Harden and Improve Performance of Nginx

We will also consider the sample SSL configuration below (we have changed the actual domain name for security reasons), which tells Nginx to listen on both port 80 and port 443, and to redirect all HTTP requests to HTTPS by default.

Nginx Sample Configuration

server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://www.example.com$request_uri;
}

server {
    listen 443 ssl http2;
    server_name example.com www.example.com;

    root /var/www/html/example.com/;
    index index.php index.html index.htm;

    #charset koi8-r;
    access_log /var/log/nginx/example.com/example.com_access_log;
    error_log /var/log/nginx/example.com/example.com_error_log error;

    # SSL/TLS configs
    ssl on;
    ssl_certificate /etc/ssl/certs/example_com_cert_chain.crt;
    ssl_certificate_key /etc/ssl/private/example_com.key;
    include /etc/nginx/ssl.d/ssl.conf;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /var/www/html/example.com/;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    location ~ \.php$ {
        root /var/www/html/example.com/;
        fastcgi_pass 127.0.0.1:9001;
        #fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
        include /etc/nginx/fastcgi_params;
    }

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}
}


Using the above configuration, once a client tries to access your site via port 80, i.e. http://example.com, the error in question will be displayed, as in the following screenshot.

Nginx 400 Bad Request Error


You encounter this error because every time a client tries to access your site via HTTP, the request is redirected to HTTPS. Nginx then expects SSL to be used in the transaction, yet the original request (received via port 80) was plain HTTP, so it complains with the error.

On the other hand, if a client uses https://example.com, they will not encounter the above error. In addition, if you have other websites configured not to use SSL, Nginx will try to use HTTPS by default for them, resulting in the above error.

To fix this error, comment out the line below in your configuration, or set the directive to off (the ssl parameter on the listen directive already enables TLS for that server block).

#ssl on;
OR
ssl off;

Save and close the file. Then restart the nginx service.

# systemctl restart nginx
OR
$ sudo systemctl restart nginx

This way, you can enable nginx to handle both HTTP and HTTPS requests for multiple server blocks.
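As a sketch, the HTTPS server block then relies only on the ssl parameter of the listen directive; newer Nginx versions deprecate the standalone ssl directive in favor of this form. The certificate paths are the ones from the sample configuration above:

```nginx
server {
    listen 443 ssl http2;
    server_name example.com www.example.com;

    # TLS is enabled by the "ssl" parameter on listen above,
    # so no "ssl on;" directive is needed (or wanted) here.
    ssl_certificate     /etc/ssl/certs/example_com_cert_chain.crt;
    ssl_certificate_key /etc/ssl/private/example_com.key;
}
```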

Finally, below is a list of articles about setting up SSL HTTPS on common Linux distributions and FreeBSD.

  1. Setting Up HTTPS with Let’s Encrypt SSL Certificate For Nginx on RHEL/CentOS
  2. Secure Nginx with Free Let’s Encrypt SSL Certificate on Ubuntu and Debian
  3. How to Secure Nginx with SSL and Let’s Encrypt in FreeBSD

That’s all for now. If you know of any other way to solve this error, please let us know via the feedback form below.

Why I Find Nginx Practically Better Than Apache

According to the latest web server survey by Netcraft, which was carried out towards the end of 2017, (precisely in November), Apache and Nginx are the most widely used open source web servers on the Internet.

Apache is a free, open-source HTTP server for Unix-like operating systems and Windows. It was designed to be a secure, efficient and extensible server that provides HTTP services in sync with the prevailing HTTP standards.

Ever since its launch in 1996, Apache has been the most popular web server on the Internet. It is the de facto standard for web servers in the Linux and open source ecosystem. New Linux users normally find it easier to set up and use.

Nginx (pronounced ‘Engine-x’) is a free, open-source, high-performance HTTP server, reverse proxy, and an IMAP/POP3 proxy server. Just like Apache, it also runs on Unix-like operating systems and Windows.


Well known for its high performance, stability, simple configuration, and low resource consumption, it has over the years become hugely popular, and its usage on the Internet keeps growing. It is now the web server of choice among experienced system administrators and webmasters of top sites.

Some of the busy sites powered by:

  • Apache are: PayPal, BBC.com, BBC.co.uk, SSLLABS.com, Apple.com plus lots more.
  • Nginx are: Netflix, Udemy.com, Hulu, Pinterest, CloudFlare, WordPress.com, GitHub, SoundCloud and many others.

There are numerous resources already published on the web comparing Apache and Nginx (I really mean ‘Apache vs. Nginx’ articles), many of which clearly explain, in detail, their top features and behavior under various scenarios, including performance measures in lab benchmarks. Therefore that will not be addressed here.

In the next section, I will simply share my experience and thoughts about the whole debate, having tried out both Apache and Nginx in production environments, based on requirements for hosting modern web applications.

Reasons Why I Find Nginx Practically Better Than Apache

Following are reasons why I prefer Nginx web server over Apache for modern web content delivery:

1. Nginx is Lightweight

Nginx is one of the most lightweight web servers out there. It has a small footprint on a system compared to Apache, which implements a vast scope of functionality necessary to run an application.

Because Nginx puts together only a handful of core features, it relies on dedicated third-party upstream web servers, such as an Apache backend, FastCGI, Memcached, SCGI, and uWSGI servers, or on application servers, i.e. language-specific servers such as Node.js, Tomcat, etc.

Therefore its memory usage is far better suited to limited-resource deployments than Apache's.

2. Nginx is Designed for High Concurrency

As opposed to Apache’s threaded or process-oriented architecture (a process-per-connection or thread-per-connection model), Nginx uses a scalable, event-driven (asynchronous) architecture. It employs a predictable process model that is tailored to the available hardware resources.

It has a master process (which performs privileged operations such as reading configuration and binding to ports) that creates several worker and helper processes.

The worker processes can each handle thousands of HTTP connections simultaneously, read and write content to disk, and communicate with upstream servers. The helper processes (the cache manager and cache loader) manage on-disk content caching operations.

This makes its operations scalable, resulting in high performance. This design approach further makes it fast and favorable for modern applications. In addition, third-party modules can be used to extend the native functionality of Nginx.

3. Nginx is Easy to Configure

Nginx has a simple configuration file structure, making it super easy to configure. It consists of modules which are controlled by directives specified in the configuration file. In addition, directives are divided into block directives and simple directives.

A block directive is delimited by braces ({ and }). If a block directive can have other directives inside its braces, it is called a context; examples are events, http, server, and location.

http {
    server {
    }
}

A simple directive consists of the name and parameters separated by spaces and ends with a semicolon (;).

http {
    server {
        location / {
            ## this is a simple directive called root
            root /var/www/html/example.com/;
        }
    }
}

You can include custom configuration files using the include directive, for example.

http {
    server {
    }
    ## examples of including additional config files
    include /path/to/config/file/*.conf;
    include /path/to/config/file/ssl.conf;
}

A practical example for me was how easily I managed to configure Nginx to run multiple websites with different PHP versions, which was a bit of a challenge with Apache.
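That setup essentially boils down to pointing each server block at a different PHP-FPM backend. A minimal sketch (the domain names and socket paths are illustrative and depend on how each PHP-FPM version was installed):

```nginx
# Site A handled by PHP 7.0, site B by PHP 7.2 (hypothetical paths).
server {
    server_name site-a.example.com;
    root /var/www/html/site-a;

    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}

server {
    server_name site-b.example.com;
    root /var/www/html/site-b;

    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}
```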

4. Nginx is an Excellent Frontend Proxy

One of the common uses of Nginx is setting it up as a proxy server: in this case it receives HTTP requests from clients and passes them, over various protocols, to the proxied or upstream servers mentioned above. You can also modify the client request headers that are sent to the proxied server, and configure buffering of the responses coming from proxied servers.

It then receives responses from the proxied servers and passes them to clients. It is much easier to configure as a proxy server than Apache, since the required modules are in most cases enabled by default.
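A minimal frontend-proxy configuration looks like the sketch below; the backend address and server name are assumptions for illustration:

```nginx
server {
    listen 80;
    server_name app.example.com;

    location / {
        # Hand every request to the upstream application server.
        proxy_pass http://127.0.0.1:3000;
        # Forward the original host and client address to the backend.
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```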

5. Nginx is Remarkable for Serving Static Content

Static content or files are typically files stored on disk on the server, for example CSS files, JavaScript files, or images. Let’s consider a scenario where you are using Nginx as a frontend for Node.js (the application server).

Although Node.js servers (specifically Node frameworks) have built-in features for static file handling, they shouldn't need to do intensive processing to deliver non-dynamic content; therefore it is practically beneficial to configure the web server to serve static content directly to clients.

Nginx can do a much better job of handling static files from a specific directory, and can prevent requests for static assets from choking the upstream server processes. This significantly improves the overall performance of the backend servers.
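A common sketch of this split, assuming the Node.js application listens on port 3000 and static assets live under /static/ (both assumptions for illustration):

```nginx
server {
    listen 80;
    root /var/www/html/example.com;

    # Serve static assets straight from disk and let browsers cache them.
    location /static/ {
        expires 30d;
        access_log off;
    }

    # Everything else goes to the application server.
    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}
```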

6. Nginx is an Efficient Load Balancer

Realizing high performance and uptime for modern web applications may call for running multiple application instances on a single HTTP server or across distributed HTTP servers. This may in turn necessitate setting up load balancing to distribute the load between your HTTP servers.

Today, load balancing has become a widely used approach for optimizing operating system resource utilization, maximizing flexibility, cutting down latency, increasing throughput, achieving redundancy, and establishing fault-tolerant configurations – across multiple application instances.

Nginx uses the following load balancing methods:

  • round-robin (default method) – requests to the upstream servers are distributed in a round-robin fashion (in order of the list of servers in the upstream pool).
  • least-connected – here the next request is proxied to the server with the least number of active connections.
  • ip-hash – here a hash-function is used to determine what server should be selected for the next request (based on the client’s IP address).
  • Generic hash – under this method, the system administrator specifies a hash (or key) with the given text, variables of the request or runtime, or their combination. For example, the key may be a source IP and port, or URI. Nginx then distributes the load amongst the upstream servers by generating a hash for the current request and placing it against the upstream servers.
  • Least time (Nginx Plus) – assigns the next request to the upstream server with the least number of current connections but favors the servers with the lowest average response times.
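The methods above are selected inside an upstream block. A sketch with three hypothetical backends (round-robin is the default; uncommenting one directive switches the method):

```nginx
upstream backend {
    # least_conn;             # switch to least-connected balancing
    # ip_hash;                # or pin clients to servers by IP
    server 10.0.0.1;
    server 10.0.0.2;
    server 10.0.0.3 weight=2; # receives roughly twice the requests
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}
```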

7. Nginx is Highly Scalable

Furthermore, Nginx is highly scalable, and modern web applications, especially enterprise applications, demand technology that provides high performance and scalability.

One company benefiting from Nginx’s amazing scalability is CloudFlare: it has managed to scale its web applications to handle more than 15 billion monthly page views with a relatively modest infrastructure, according to Matthew Prince, co-founder and CEO of CloudFlare.

For a more comprehensive explanation, check out this article on the Nginx blog: NGINX vs. Apache: Our View of a Decade-Old Question.

Conclusion

Apache and Nginx can’t simply replace each other; they each have their strong and weak points. However, Nginx offers a powerful, flexible, scalable and secure technology for reliably and efficiently powering modern websites and web applications. What is your take? Let us know via the feedback form below.

TLP – Quickly Increase and Optimize Linux Laptop Battery Life

TLP is a free, open source, feature-rich command-line tool for advanced power management, which helps to optimize battery life on laptops powered by Linux. It runs on every laptop brand and ships with a default configuration already tuned to effectively and reliably maintain battery life, so you can simply install and use it.

It performs power saving by allowing you to configure how devices such as the CPU, disks, USB and PCI(e) devices, and radios utilize power when your laptop is running on battery.

TLP Features:

  • It is highly configurable through various power saving parameters.
  • It uses automated background tasks.
  • Uses kernel laptop mode and dirty buffer timeouts.
  • Supports processor frequency scaling including “turbo boost” and “turbo core”.
  • Has a power aware process scheduler for multi-core/hyper-threading.
  • Provides for runtime power management for PCI(e) bus devices.
  • PCI Express active state power management (PCIe ASPM).
  • Supports radeon graphics power management (KMS and DPM).
  • Has an I/O scheduler (per disk).
  • Offers USB autosuspend with blacklist.
  • Supports Wifi power saving mode.
  • Also offers Audio power saving mode.
  • Offers hard disk advanced power management level and spin down timeout (per disk).
  • Also supports SATA aggressive link power management (ALPM) and so much more.

How to Install TLP Battery Management Tool in Linux

The TLP package can be easily installed on Ubuntu, as well as the corresponding Linux Mint release, using the TLP PPA repository as shown.

$ sudo add-apt-repository ppa:linrunner/tlp
$ sudo apt-get update
$ sudo apt-get install tlp tlp-rdw 

On Debian, add the following line to your /etc/apt/sources.list file, then update the system package cache and install it.

# echo "deb http://ftp.debian.org/debian jessie-backports main" >> /etc/apt/sources.list
# apt-get update
# apt-get install tlp tlp-rdw


On Fedora, Arch Linux and OpenSuse, execute the following command as per your distribution.

# dnf install tlp tlp-rdw [On Fedora]
# pacman -S tlp tlp-rdw [On Arch Linux]
# zypper install tlp tlp-rdw [On OpenSUSE]

How to Use TLP to Optimize Battery Life in Linux

Once you have installed TLP, its configuration file is /etc/default/tlp and you will have the following commands to use:

  • tlp – apply laptop power saving settings
  • tlp-stat – displays all power saving settings
  • tlp-pcilist – displays PCI(e) device data
  • tlp-usblist – for viewing USB devices data
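As a sketch, a few commonly tuned parameters in /etc/default/tlp look like this; the values below are examples, not recommendations, so check the comments in your own file before changing anything:

```shell
# Excerpt of /etc/default/tlp with illustrative values.
TLP_ENABLE=1                             # master switch for TLP
CPU_SCALING_GOVERNOR_ON_AC=performance   # CPU governor on mains power
CPU_SCALING_GOVERNOR_ON_BAT=powersave    # CPU governor on battery
USB_AUTOSUSPEND=1                        # suspend idle USB devices
```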

It should start automatically as a service; you can check whether it is running under systemd using the systemctl command.

$ sudo systemctl status tlp

After the service starts running, you would normally have to reboot the system for the settings to take effect. But you can avoid this by manually applying the current laptop power saving settings, with root privileges using the sudo command, like so.

$ sudo tlp start 

Afterwards, confirm that it is running using the following command, which actually shows system information and TLP status.

$ sudo tlp-stat -s 
Show System and TLP Information


Important: As we mentioned before, it uses automated background tasks, but you will not see any TLP background process or daemon in the ps command output.

To view current TLP configuration, run the following command with -c option.

$ sudo tlp-stat -c
Show TLP Configuration


To display all power settings run the following command.

$ sudo tlp-stat
Show Power Saving Settings


To display Linux battery information, run the following command with -b switch.

$ sudo tlp-stat -b
Show Linux Battery Information


To display Temperatures and Fan Speed of system, run the following command with -t switch.

$ sudo tlp-stat -t
Show CPU Temperature and Fan Speed


To display Processor Data, run the following command with -p switch.

$ sudo tlp-stat -p
Show Processor Data


To display any Warnings, run the following command with -w switch.

$ sudo tlp-stat -w

Note: If you are using a ThinkPad, there are certain additional packages you need to install for your distribution, which you can check on the TLP homepage. You will also find more information and a number of other usage commands there.

Read Also: PowerTop – Monitors Total Power Usage and Improve Linux Laptop Battery Life

TLP is a useful tool for all laptops powered by Linux operating systems. Give us your thought about it via the comment form below, and you can let us know of any other similar tools you have come across as well.

A school in India defies the traditional education model

Located in a sleepy village just two hours away from the bustling metropolis of Mumbai is a school that defies traditional educational models by collaboratively owning, building, and sharing knowledge and technology. The school uses only open source software and hardware in its approach to learning, and takes pride in the fact that none of its students have used or even seen proprietary software, including the ubiquitous Windows operating system.

The Tamarind Tree School, located in Dahanu Taluka, Maharashtra, India, is an experiment in open education. Open education is a philosophy about how people produce, share, and build on knowledge and technology, advocating a world in which education is for social good, and everyone has equal opportunity and access to education, training, and knowledge.

Why open education?

The school’s founders believe that the commodification and ownership of knowledge is the primary reason for the inequity in access to quality educational resources. While the Internet may have created a proliferation of digital content and learning tools, the relationship between technology creation, knowledge building, access, and ownership remains skewed for most people in society.

The trend toward expensive primary schools in India, copyrights on learning videos, academic journals, and software, “free” educational apps, and the manufacturing of laptops and devices support the idea that knowledge is owned and controlled by a few.

Many people confuse free usage with free access. But freedom such as ownership and collaboration among users is reduced or eliminated when learning communities do not feel empowered to build their own digital devices, set up their own networks, or create their own digital learning tools. As a result, many learners unknowingly become thieves (as seen in the rampant use of pirated software in India) or compromise their fundamental freedom to own and engage with the digital world on their terms. This reality is even more grim in rural India, where disadvantaged communities are denied access or equal opportunity to the digital world.

How do we create a world where everyone enjoys access to quality education? One approach is to fundamentally change the way knowledge and technology are owned and controlled.

The open source movement offers a solution.

Open education is based on the premise that knowledge should be collaboratively built and shared by all. It believes in creating producers and collaborators of knowledge rather than consumers of it.

How we implement open education

Based on these values and philosophies, the Tamarind Tree school has been experimenting with several open source options:

1. Single-board computers

The school has been able to avoid proprietary hardware, thanks to the work of organizations around the world that build single-board computers. A single-board computer (SBC) is a complete computer built on a single circuit board, complete with microprocessor(s), memory, input/output (I/O), and other required features.

The school selected a robust, affordable SBC built by the Raspberry Pi Foundation, and uses it to teach children programming skills and computational thinking. Students at Tamarind Tree enjoy coding and programming using the visual programming tool Scratch on these hardy open source machines.

2. Open source gamified software and open educational resources

The school, which uses only open educational resources (OERs), employs a combination of open digital tools like Gcompris, Tux Math, Tux Paint, Jfraction, and programs from the open source KDE Community to teach English, math, and science in a fun, interactive manner.

3. My Big Campus learning management system

To enable relevant, contextual learning, Tamarind Tree set up its own learning management system, which is hosted on the open source platform Moodle. Students as young as 7 years old can log on to their courses, along with a facilitator, and are guided to different online and offline activities. The system also supports individualized learning. The curriculum hosted at My Big Campus is derived from the National Council of Educational Research and Training in New Delhi. Students enjoy answering quizzes, commenting on images and blogs, creating digital art, and more. Courses are created contextually, grading can be done online, and students can learn at their own pace.

4. E-library

Tamarind Tree also has a facility where any student with a digital device can read books, articles, or news reports from a collection of more than 3,000 resources hosted on the school’s e-library server. The e-library, which is updated continuously, has been set up on the single-board computer and uses the Calibre open source library management system to organize, tag, and upload resources. All books hosted on the server are in the public domain or hold a Creative Commons license.

As students build knowledge by creating and playing their own computer games and participating in other educational activities, teachers can customize course materials to fit the needs of individual learners through digital content and local resources. The school’s goal is to establish that knowledge and technology can be entirely built, owned, and controlled by learning communities by using open source educational resources.

Is the future of education open?

Open education can help build a society that can provide free and open access to education and knowledge for all people with a desire to learn. The Tamarind Tree School demonstrates the potential of creating an educational model that believes in the democratization of knowledge.

An introduction to Eclipse MicroProfile

Enterprise Java has been defined by two players: Spring on one side and Java Enterprise Edition on the other. The Java EE set of specifications was developed in the Java Community Process under the stewardship of Oracle. The current Java EE 8 was released in September 2017; the prior version came out in 2013.

Between those releases, the industry saw a lot of change, most notably containers, the ubiquitous use of JSON, HTTP/2, and microservices architectures. Unfortunately, there was not much related activity around Java EE, but users of the many Java EE-compliant servers demanded adoption of those new technologies and paradigms.

As a result, a group of vendors and community members founded MicroProfile to develop new specifications for using Java EE in microservice architectures that could be added into future versions of Java EE.

The first release of MicroProfile, in summer 2016, included three existing standards to serve as a baseline. At the end of 2016, MicroProfile joined the Eclipse Foundation (which explains Eclipse in the name) to leverage Eclipse’s strong governance and intellectual property expertise.

In 2017, there were two additional releases, and the next one is right around the corner. MicroProfile aims to release an update roughly every three months with specific content in a time-boxed way. Releases consist of a series of specifications, each developed at its own pace, and the umbrella release contains all of the specifications’ current versions.

What’s in the box?

Sweets for my sweet, sugar for my honey.

Well, luckily not, as too much sugar is bad for your health. But the individual specifications do have some pretty tasty content. Development of new specifications started after the first release.

The specifications that make up MicroProfile 1.2, which was released at JavaOne 2017, are:

  • Metrics: Deals with telemetry data and how it is exposed in a uniform way. This includes data from the underlying Java virtual machine as well as data from applications.
  • Health: Reports whether a service is healthy. This is important for schedulers like Kubernetes to determine if an application (container) should be killed and a new one started.
  • Config: Provides a uniform way of relaying configuration data into the application independent of the configuration source.
  • Fault tolerance: Includes mechanisms to make microservices resilient to failures in the network or other services they rely on, such as defining timeouts for calls to remote services, retrying policies in case of failure, and setting fallback methods.
  • JWT propagation: JSON Web Token (JWT) is a token-based authentication/authorization mechanism that allows services to authenticate, authorize, and verify identities based on a security token. JWT propagation defines the interoperability and container integration requirements for using JWT with Java EE-style role-based access control.
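The fault-tolerance behavior described above is normally declared with annotations such as @Retry and @Fallback from the spec. As a minimal, self-contained sketch of the idea, the retry-then-fall-back logic those annotations automate can be hand-rolled in plain Java (the helper names below are hypothetical, not part of the MicroProfile API):

```java
import java.util.function.Supplier;

// Illustrative stand-in for what MicroProfile Fault Tolerance does
// declaratively with @Retry and @Fallback on a business method.
public class RetryWithFallback {

    // Try the primary call up to maxRetries extra times;
    // if every attempt fails, return the fallback result instead.
    static <T> T callWithRetry(Supplier<T> primary, Supplier<T> fallback,
                               int maxRetries) {
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return primary.get();
            } catch (RuntimeException e) {
                // remote call failed; loop around and retry
            }
        }
        return fallback.get();   // all attempts exhausted
    }

    public static void main(String[] args) {
        String result = callWithRetry(
                () -> { throw new RuntimeException("service down"); },
                () -> "cached response",
                3);
        System.out.println(result);   // prints "cached response"
    }
}
```

With the real spec, the container weaves this plumbing around the annotated method, so business code stays free of retry loops.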

The just-released MicroProfile 1.3 includes updates to some of the above and adds the following new specifications:

  • OpenTracing: A mechanism for distributed tracing of calls across a series of microservices.
  • OpenAPI: A way to document data models and REST APIs so they can be read by machines and client code can be generated automatically from that documentation. OpenAPI was derived from the Swagger specification.
  • REST client: A type-safe REST client that builds on the standard JAX-RS client to do more heavy lifting so consumer code can rely on strongly typed data and method invocations.
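The type-safe REST client works by generating an implementation of a plain Java interface, so consumer code calls strongly typed methods instead of assembling requests by hand. A minimal sketch of that idea using a JDK dynamic proxy with a faked transport (the interface and names are hypothetical; the real MicroProfile client derives the HTTP call from JAX-RS annotations on the interface):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Hypothetical service interface; with MicroProfile REST Client the
// implementation is generated for you from JAX-RS annotations.
interface GreetingService {
    String greet(String name);
}

public class TypeSafeClientSketch {

    // Build a proxy that "implements" the interface. Here the handler
    // fakes the HTTP round trip so the sketch stays self-contained;
    // a real client would marshal the call into an HTTP request.
    static GreetingService client() {
        InvocationHandler handler = (proxy, method, args) ->
                "Hello, " + args[0] + "!";   // stand-in for a remote call
        return (GreetingService) Proxy.newProxyInstance(
                GreetingService.class.getClassLoader(),
                new Class<?>[]{GreetingService.class}, handler);
    }

    public static void main(String[] args) {
        GreetingService svc = client();
        System.out.println(svc.greet("MicroProfile"));  // Hello, MicroProfile!
    }
}
```

The payoff is that the compiler checks every call site against the interface, so a changed endpoint signature breaks the build rather than failing at runtime.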

Upcoming releases are expected to pick up some APIs and new API versions from Java EE 8, such as JSON-B 1.0, JSON-P 1.1, CDI 2.0, and JAX-RS 2.1.

Where can I learn more?

How can I get involved?

The main communication channel is the MicroProfile discussion group. All specifications have a GitHub repository under the Eclipse organization, so they are using GitHub issues and pull requests. Also, each specification usually has a Gitter discussion group.

If you have an idea for a new MicroProfile specification, join the discussion group, present your idea, and hack away. Once others support your idea, a new repository will be created, and the more formal process can begin.

How to Get Domain and IP Address Information Using WHOIS Command


How to View Configuration Files Without Comments in Linux


How to Send a Message to Logged Users in Linux Terminal
