Feed aggregator
Development Release: AlmaLinux OS 10.0 Beta 1
Distribution Release: Archman Linux 20241207
Distribution Release: Window Maker Live 12.8
DistroWatch Weekly, Issue 1100
Review: Oreon 9.3 / Lime R2
News: IPFire unveils new appliance, Fedora Asahi shows off new video drivers, openSUSE Leap Micro updated, Redox OS running on RISC-V
Questions and answers: Differences in speeds
Released last week: FreeBSD 14.2, Nitrux e3ba3c69, EasyOS 6.5, Alpine....
What’s KernelCare?
This article explains all that you need to know about KernelCare. But before diving into KernelCare, let's do a quick recap of the Linux kernel; it will help you understand KernelCare better. The Linux kernel is the core of the Linux operating system. It resides in memory and tells the CPU what to do.
Now let's begin with today's topic, KernelCare. If you're a system administrator, this article will be especially valuable to you.
What is KernelCare?
So, what's KernelCare? KernelCare is a patching service that offers live security updates for Linux kernels, shared libraries, and embedded devices. It patches security vulnerabilities in the Linux kernel without service interruptions or any downtime. Once you install KernelCare on a server, security updates are applied automatically every four hours. This eliminates the need to reboot your server after updates.
It is a commercial product, licensed under GNU GPL version 2, developed by CloudLinux, Inc. The first beta version of KernelCare was released in March 2014, and its commercial launch followed in May 2014. Since then, the company has added various useful integrations with automation tools, vulnerability scanners, and other systems.
Operating systems supported by KernelCare include CentOS/RHEL 5, 6, 7; Cloud Linux 5, 6; OpenVZ, PCS, Virtuozzo, Debian 6, 7; and Ubuntu 14.04.
Is KernelCare Important?
Are you wondering whether KernelCare is important for you? By installing the latest kernel security patches, you minimize potential risks. Updating the Linux kernel manually can take hours. Apart from the server downtime, it can be a stressful job for system admins and clients alike.
Once kernel updates are applied, the server needs a reboot. This is usually done during off-peak hours, which causes additional stress. However, skipping server reboots can cause a whole host of security issues. Sometimes, even after rebooting, a server experiences issues and doesn't come back up easily. Fixing such issues is a headache for system admins; often they need to roll back all the applied updates just to get the server up quickly.
With KernelCare, you can avoid such issues.
How Does KernelCare Work?
KernelCare eliminates non-compliance and service interruptions caused by system reboots. The KernelCare agent resides on your server and periodically checks for new updates. If it finds any, the agent downloads them and applies them to the running kernel. A KernelCare patch is a piece of code used to replace buggy code in the kernel.
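In day-to-day use, the agent is driven by the kcarectl command-line client. As a quick sketch, you can force an immediate patch check and then confirm the effective (live-patched) kernel version; exact flags can vary between versions, so check kcarectl --help on your installation:
$ sudo kcarectl --update
$ sudo kcarectl --info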
Getting Started with Docker Semi-Self-Hosting on Linode
As technology evolves, we need to be ever more vigilant about our online security. Our browsing and shopping behaviors are continuously tracked through cookies dropped on our browsers, cookies we allow by clicking the "I Accept" button next to deliberately long agreements before we can get the full benefit of a site.
Additionally, hackers are always looking for a target and it's common for even big companies to have their servers compromised in any number of ways and have sensitive data leaked, often to the highest bidder.
These are just some of the reasons that I started looking into self-hosting as much of my own data as I could.
Because not everyone has the option to self-host on their own, private hardware, whether it's for lack of hardware, or because their ISP makes it difficult or impossible to do so, I want to show you what I believe to be the next best step, and that's a semi-self-hosted solution on Linode.
Let's jump right in!
Setting up a Linode
First things first, you'll need a Docker server set up. Linode has made that process very simple: you can set one up for just a few bucks a month, add a private IP address for free, and add backups for just a couple of bucks more per month.
Log into your Linode account and click "Create Linode".
On the "Create" page, click on the "Marketplace" tab and scroll down to the "Docker" option. Click it.
With Docker selected, scroll down and close the "Advanced Options" as we won't be using them.
Below that, we'll select the most recent version of Debian (version 10 at the time of writing).
To get the lowest latency for your setup, select the Region nearest you.
When we get to the "Linode Plan" area, find an option that fits your budget. You can always start with a small plan and upgrade later as your needs grow.
Next, enter a "Linode Label" as an identifier for you. You can enter tags if you want.
Enter a Root Password and import an SSH key if you have one. If you don't, that's fine; you don't need to use an SSH key. If you'd like to generate one and use it, you can find more information in "Creating an SSH Key Pair and Configuring Public Key Authentication on a Server".
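Once the Linode finishes provisioning, it's worth confirming over SSH that the Marketplace app actually installed Docker. A quick sanity check might look like this (the IP address below is a placeholder for your Linode's address):
$ ssh root@203.0.113.10
$ docker --version
$ docker run hello-world
If the hello-world container prints its greeting, the Docker engine is installed and working.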
Manage Java versions with SDKMan
Java is more than just a programming language: It's also a runtime.
Applications written in Java are compiled to Java bytecode then interpreted by a Java Virtual Machine (JVM), which is why you can write Java on one platform and have it run on all other platforms.
A challenge can arise, however, when a programming language and an application develop at different rates. It's possible for Java (the language) to increment its version number at the same time your favorite application continues to use an older version, at least for a while.
If you have two must-have applications, each of which uses a different version of Java, you may want to install both an old version and a new version of Java on the same system. If you're a Java developer, this is particularly common, because you might contribute code to several projects, each of which requires a different version of Java.
The SDKMan project makes it easy to manage different versions of Java and related languages, including Groovy, Scala, Kotlin, and more.
SDKMan is like a package manager just for versions of Java.
Install SDKMan
SDKMan requires these commands to be present on your system:
- zip
- unzip
- curl
- sed
On Linux, you can install these using your package manager. On Fedora, CentOS Stream, Mageia, and similar:
$ sudo dnf install zip unzip curl sed
On Debian-based distributions, use apt instead of dnf. On macOS, use MacPorts or Homebrew. On Windows, you can use SDKMan through Cygwin or WSL.
Once you've satisfied those requirements, download the SDKMan install script:
$ curl "https://get.sdkman.io" --output sdkman.sh
Take a look at the script to see what it does, and then make it executable and run it:
$ chmod +x sdkman.sh
$ ./sdkman.sh
Configure
When the installation has finished, open a new terminal, or run the following in the existing one:
$ source "$HOME/.sdkman/bin/sdkman-init.sh"
Confirm that it's installed:
$ sdk version
Install Java with SDKMan
Now when you want to install a version of Java, you can do it using SDKMan.
First, list the candidates for Java available:
$ sdk list java
=================================================
Available Java Versions for Linux 64bit
=================================================
Vendor | Version | Dist | Identifier
-------------------------------------------------
Gluon | 22.0.0.3.r17 | gln | 22.0.0.3.r17-gln
| 22.0.0.3.r11 | gln | 22.0.0.3.r11-gln
GraalVM | 22.0.0.2.r17 | grl | 22.0.0.2.r17-grl
| 21.3.1.r17 | grl | 21.3.1.r17-grl
| 20.3.5.r11 | grl | 20.3.5.r11-grl
| 19.3.6.r11 | grl | 19.3.6.r11-grl
Java.net | 19.ea.10 | open | 19.ea.10-open
| 18 | open | 18-open
| 17.0.2 | open | 17.0.2-open
| 11.0.12 | open | 11.0.12-open
| 8.0.302 | open | 8.0.302-open
[...]
This provides a list of different Java distributions available across several popular vendors, including Gluon, GraalVM, OpenJDK from Java.net, and many others.
You can install a specific version of Java using the value in the Identifier column:
$ sdk install java 11.0.12-open
The sdk command uses tab completion, so you don't need to view a list. Instead, you can type sdk install java 11 and then press Tab a few times to see the options.
Alternately, you can just install the default latest version:
$ sdk install java
Set your current version of Java
Set the version of Java for a terminal session with the use subcommand:
$ sdk use java 17.0.2-open
To set a version as default, use the default subcommand:
$ sdk default java 17.0.2-open
Get the current version in effect using the current subcommand:
$ sdk current java
Using java version 17.0.2-open
Removing Java with SDKMan
You can remove an installed version of Java using the uninstall subcommand:
$ sdk uninstall java 11.0.12-open
More SDKMan
You can do more customization with SDKMan, including updating and upgrading Java versions and creating project-based environments. It's a useful command for any developer or user who wants the ability to switch between versions of Java quickly and easily.
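Project-based environments are driven by an .sdkmanrc file in the project directory. As a quick sketch (the version identifier is just an example from the list above):
$ cd my-project
$ sdk env init
$ cat .sdkmanrc
java=17.0.2-open
$ sdk env
With that file in place, running sdk env switches your shell to the pinned Java version whenever you enter the project.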
If you love Java, or use Java, give SDKMan a try. It makes Java easier than ever!
Open exchange, open doors, open minds: A recipe for global progress
Could open organization principles successfully apply to entire societies?
That's the question I asked as I read the book Open: The Story of Human Progress by Johan Norberg, which aims to examine the relative success of "open societies" throughout global history.
In this review—the first article in an extended discussion of the work from Open Organization community members—I will summarize more precisely what Norberg means when he uses the term "open" and offer an initial assessment of his arguments. Ultimately, however, our discussion will explore more expansive themes, like:
- the importance of open societies,
- what the future could (or should) look like in a more open world, and
- how these principles impact our collective understanding of how organizations operate in service of "the greater good"
Essentially, Norberg is looking at four dimensions of "open," which he calls:
- "open exchange" (global goods and service flows across borders),
- "open doors" (global movement of people),
- "open minds" (global receptivity to new and different ideas), and
- "open societies" (how cultures should be governed to benefit from the above three)
Let me discuss each one more extensively.
Open exchange
Norberg uses the phrase "open exchange" to refer to the movement of goods and services not just across borders but within them as well. Simply put, he believes that people across the world prosper when trade increases, because increased trade leads to increased cooperation and sharing.
His argument goes like this: when a nation (and to be sure, Norberg aims his advice at contemporary nation-states) allows foreign goods into its market, it generally gains expertise, skills, and knowledge, too. Surplus goods and services should be sold wherever they might provide value and benefit for someone else, and those benefits might include, for example, favors, ideas, and knowledge, not just goods and services themselves. Reciprocity and relatively equal exchange are, for Norberg, an unavoidable aspect of human nature, building binding relationships that promote more generosity. Generosity in turn promotes more trade, creating a cycle of prosperity for all involved.
This view holds for organizations working with uncommon trade partners as well. Greater organizational specialization leads to the need for more cooperation and sharing, which in turn leads to even more specialization. So here we can see a link between open societies and open organizations regarding trade.
Open doors
For Norberg, "open doors" refers to people's ability to move across national borders, for one reason or another. He believes the gradual inclusion of foreigners into a society leads to more novel and productive interactions, which leads to greater innovation, more ideas, and more rapid discoveries. For a society to be productive, it must get the right talent performing the right tasks. Norberg argues that there should be no barriers to that match-up, and people should be mobile, even across borders, so they can achieve it.
Norberg outlines how, throughout history, diverse groups of people solve problems more effectively—even if they create more friction as they do so, as members have their assumptions questioned. This kind of open environment must be promoted, supported, and managed, however, in order to avoid groupthink, the predominance of voices that are merely the loudest, and the outsized influence of niche interests.
Critical to the success of "open doors" are recognition, respect, understanding, acceptance, and inclusivity toward others. Norberg discusses the importance of these qualities, citing the World Values Survey, which measures some of them. Done well, open doors can allow societies to cross-fertilize, borrowing ideas and technology from each other and multiplying that which works best.
We could say that's equally true for an organization wanting to develop a new product or market, too.
Open minds"Open economies stimulate open-mindedness," Norberg writes. For him, "open minds" are those receptive to thoughts and belief systems that may seem different, foreign, or alien to them—those that both offer and receive different perspectives. Open minds, Norberg claims, lead to more rapid progress.
Open minds flourish when given the space to encounter new ideas and explore them freely—rather than, say, simply accept the given dogma of an age. According to Norberg, people from a wide range of disciplines, specialties, and skills coming together and sharing their perspectives stimulates growth and progress. But this is only possible when they exist in an environment where they feel free to question the status quo and possibly overturn long-standing beliefs. Barriers to creating those environments certainly exist (in fact, the entire second half of Norberg's book offers a deeper analysis of them).
Of course this is true in organizations as well. The more people (and the more different people) who look at a problem, the better. This not only leads to faster solutions but helps overcome anyone's individual biases. Serendipitous solutions to problems can seemingly come out of nowhere more often, as there will be better and more peer review of strongly held positions. And yet differences create friction, so standards of protocol and behavior are required to ensure progress.
For Norberg, the world benefits when scientists, philosophers, industrialists, and craftspeople can influence one another's thinking (and are receptive to having their thinking changed!). The same is true in open organizations when people with different roles and functions can work together and enrich one another's thinking. More experiments and greater collaboration among disciplines lead to richer discoveries.
Open societies
Combining open minds, open exchange, and open doors can lead to fully open societies globally, Norberg argues, and "the result is discoveries and achievements." Governments, he asserts, should work to foster those kinds of societies across the globe. In this way, societies can tap into the greatest talent from the entire global community.
According to Norberg, more inclusive societies based on these open policies can lead to material gains for people—fewer hours working, the ability to launch careers earlier (or retire earlier), longer lives in general, and more. This is not to mention reductions in extreme poverty, child and maternal mortality, and illiteracy globally. On top of that, for Norberg global cultural collaboration leads to better use of ecological, natural, and environmental resources. All this can be achieved through specialization that advances societies at an exponential rate through openness.
Open makes a historical argument. Norberg believes that throughout the ages it was not defenders of tradition that prospered most. Instead, those thinkers, engineers, and philosophers that challenged the status quo made the greatest contribution to global prosperity. Those figures benefitted from societies that were more open to improvements because they governed their own experiments, fostered rapid feedback loops, and built systems that quickly self-correct during setbacks.
Yet like any history, Norberg's is partial and selective, presenting isolated cases and examples. And some of those include even the most brutal empires, whose violence Norberg tends to overlook. In future parts of this review, we'll dive more deeply into various aspects of Norberg's analysis—and discuss its implications for thinking about a more open future.
Openness, this new book argues, has always been a necessary cornerstone of human civilization.
Image by:Opensource.com
22 Raspberry Pi projects to try in 2022
The possibilities for Raspberry Pi projects continue to multiply this Pi Day! The beloved single-board computer recently turned ten years old. To celebrate, we put together a list of recent Raspberry Pi tutorials written by members of the Opensource.com community.
10 Raspberry Pi projects for your home
The Raspberry Pi is ripe for DIY projects for the home. Why risk your data with a proprietary home automation tool when you can take full control with a $35 computer? Opensource.com authors have shared how they've built thermostats, monitored their home climate, set parental controls, and much more in the following tutorials.
- Build a home thermostat with a Raspberry Pi The ThermOS project is an answer to the many downsides of off-the-shelf smart thermostats.
- Monitor your home's temperature and humidity with Raspberry Pis and Prometheus Instrument a Prometheus application with Python on Raspberry Pis to collect temperature sensor data.
- Set up temperature sensors in your home with a Raspberry Pi Find out how hot your house is with a simple home Internet of Things project.
- Build a router with mobile connectivity using Raspberry Pi Use OpenWRT to get more control over your network's router.
- Troubleshoot WiFi problems with Go and a Raspberry Pi Build a WiFi scanner for fun.
- Set up network parental controls on a Raspberry Pi With minimal investment of time and money, you can keep your kids safe online.
- Monitor your greenhouse with CircuitPython and open source tools Keep track of your greenhouse's temperature, humidity, and ambient light using a microcontroller, sensors, Python, and MQTT.
- Collect sensor data with your Raspberry Pi and open source tools Learning more about what is going on in your home is not just useful; it's fun!
- Measure your Internet of Things with Raspberry Pi and open source tools Setting up an environment-monitoring system demonstrates how to use open source tools to keep tabs on temperature, humidity, and more.
- Track your family calendar with a Raspberry Pi and a low-power display Help everyone keep up with your family's schedule using open source tools and an E Ink display.
5 Raspberry Pi projects for productivity
You can be productive without a ton of fancy tools. Whether you want to host your personal blog or start crypto trading with a reduced carbon footprint, the Raspberry Pi has you covered.
- Host your website with dynamic content and a database on a Raspberry Pi You can use free software to support a web application on a very lightweight computer.
- Use your Raspberry Pi as a productivity powerhouse The Raspberry Pi has come a long way from being primarily for hacking and hobbyists to a solid choice for a small productive workstation.
- Run your blog on a Raspberry Pi I set up a Raspberry Pi to act as a web server to host my personal blog on Drupal.
- Use your Raspberry Pi as a data logger Here's how to log the CPU temperature of a Raspberry Pi and create a spreadsheet-based report on demand.
- Convert your Raspberry Pi into a trading bot with Pythonic Reduce your power consumption by setting up your cryptocurrency trading bot on a Raspberry Pi.
7 Raspberry Pi projects for fun
The Raspberry Pi is probably most famous for its serious use case of fun! The Pi offers lots of options for tinkering with Linux, learning about computers, or celebrating your favorite holiday.
- Create a countdown clock with a Raspberry Pi Start counting down the days to your next holiday with a Raspberry Pi and an ePaper display.
- Track aircraft with a Raspberry Pi Explore the open skies with a Raspberry Pi, an inexpensive radio, and open source software.
- Control your Raspberry Pi remotely with your smartphone Control the GPIOs of your Raspberry Pi remotely with your smartphone.
- Build a programmable light display on Raspberry Pi Celebrate the holidays or any special occasion with a DIY light display using a Raspberry Pi, Python, and programmable LED lights.
- Make an automated Jack-o'-lantern with a Raspberry Pi Here's my recipe for the perfect pumpkin Pi.
- Cast your Android device with a Raspberry Pi Use Scrcpy to turn your phone screen into an app running alongside your applications on a Raspberry Pi or any other Linux-based device.
- Learn everything about computers with this Raspberry Pi kit The CrowPi is an amazing Raspberry Pi project system housed in a laptop-like body.
Go ahead and mark your calendar for trying out a few of these creative Raspberry Pi projects this year.
Celebrate Pi Day by checking out these creative and useful Raspberry Pi projects.
Collect sudo session recordings with the Raspberry Pi
I've used the sudo command for years, and one of my favorite features is how it saves a record of everything happening in a terminal while running a command. This feature has been available for over a decade. However, sudo 1.9 introduced central session recording collection, allowing you to check all administrative access to your hosts on your network at a single location and play back sessions like a movie.
I use this feature on my Raspberry Pi, and I recommend it to other Pi users. Even if you fully trust your users, logs and session recordings can help debug what happened on a given host if it acts strangely: Oops, wrong file deleted in /etc.
Why sudo?
Sudo gives administrative access to users. Unless you limit access to a short list of commands, you practically provide full access to your hosts. The pi user can use sudo without even entering a password on the Raspberry Pi OS. On other operating systems, the default configuration grants members of the wheel group full administrative access.
Before you begin
The new sudo_logsrvd application handles collection. Earlier versions of the Raspberry Pi OS only had sudo version 1.8. The latest version is based on Debian 11 and includes sudo version 1.9.5. You also need a second host with sudo 1.9, which sends recordings to sudo_logsrvd.
Configuring sudo_logsrvd
For a production environment, I recommend using TLS-encrypted connections between sudo and sudo_logsrvd. However, to simply understand how session recording works, you can go without encryption. This also means that there is nothing to configure other than creating the storage directory and starting sudo_logsrvd:
$ sudo mkdir /var/log/sudo-io
$ sudo chmod 700 /var/log/sudo-io
$ sudo sudo_logsrvd
sudo_logsrvd is now waiting for connections.
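You can verify that it is listening; by default sudo_logsrvd uses port 30343, the same port referenced in the sudoers configuration below. For example, using ss from iproute2:
$ sudo ss -tlnp | grep 30343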
Configuring sudo
Configure sudo 1.9 on a host using visudo and append the following lines to the sudoers file. You will need to replace the IP address with the one of your Raspberry Pi. Note that if you do not have a second machine with sudo 1.9, you can use the same Raspberry Pi running sudo_logsrvd for testing.
Defaults ignore_iolog_errors
Defaults log_servers = 172.16.167.129:30343
Defaults log_output
The first line is your escape route while experimenting with sudo_logsrvd: It ensures that sudo works even if sudo_logsrvd is inaccessible. This configuration is not recommended for production environments as users can execute commands without proper recording.
The next two lines configure where to send recordings and enable recordings.
Testing
For testing, do something that you cannot figure out from sudo logs in syslog: a shell session. (Be aware that sudo 1.9.8 changes this, but it is not yet available in Linux distributions.) In this case, the logs show only that a shell was started, but nothing about what happened inside:
$ sudo -s
# id
uid=0(root) gid=0(root) groups=0(root),117(lpadmin)
# cd /root/
# ls -la
total 36
drwx------ 5 root root 4096 Feb 16 12:27 .
drwxr-xr-x 18 root root 4096 Jan 28 04:22 ..
-rw------- 1 root root 827 Feb 16 12:49 .bash_history
-rw-r--r-- 1 root root 571 Apr 10 2021 .bashrc
drwx------ 3 root root 4096 Feb 16 10:54 .cache
-rw------- 1 root root 41 Feb 16 11:12 .lesshst
drwxr-xr-x 3 root root 4096 Feb 16 12:27 .local
-rw-r--r-- 1 root root 161 Jul 9 2019 .profile
drwx------ 3 root root 4096 Jan 28 04:21 .vnc
# exit
$
Even if the logs do not show anything useful, you can still use the sudoreplay command to list and playback recordings:
$ sudo sudoreplay -l
Feb 16 12:37:54 2022 : pi : TTY=/dev/pts/1 ; CWD=/home/pi ; USER=root ; HOST=raspberrypi ; TSID=000001 ; COMMAND=/usr/bin/ls -l /etc/ssl/private/
Feb 16 12:38:14 2022 : pi : TTY=/dev/pts/1 ; CWD=/home/pi ; USER=root ; HOST=raspberrypi ; TSID=000002 ; COMMAND=/usr/bin/ls -la /etc/ssl/private/
Feb 16 12:49:21 2022 : pi : TTY=/dev/pts/1 ; CWD=/home/pi ; USER=root ; HOST=raspberrypi ; TSID=000003 ; COMMAND=/bin/bash
Feb 16 12:50:03 2022 : pi : TTY=/dev/pts/1 ; CWD=/home/pi ; USER=root ; HOST=raspberrypi ; TSID=000004 ; COMMAND=/bin/bash
Feb 16 12:50:28 2022 : pi : TTY=/dev/pts/1 ; CWD=/home/pi ; USER=root ; HOST=raspberrypi ; TSID=000005 ; COMMAND=/usr/bin/sudoreplay -l
$ sudo sudoreplay 000004
Replaying sudo session: /bin/bash
# id
uid=0(root) gid=0(root) groups=0(root),117(lpadmin)
# cd /root/
# ls -la
total 36
drwx------ 5 root root 4096 Feb 16 12:27 .
drwxr-xr-x 18 root root 4096 Jan 28 04:22 ..
-rw------- 1 root root 827 Feb 16 12:49 .bash_history
-rw-r--r-- 1 root root 571 Apr 10 2021 .bashrc
drwx------ 3 root root 4096 Feb 16 10:54 .cache
-rw------- 1 root root 41 Feb 16 11:12 .lesshst
drwxr-xr-x 3 root root 4096 Feb 16 12:27 .local
-rw-r--r-- 1 root root 161 Jul 9 2019 .profile
drwx------ 3 root root 4096 Jan 28 04:21 .vnc
# exit
$
What is next?
I hope you learned something new today and will try it on your own Raspberry Pi. The setup I described here is good enough for testing. For production use, I recommend creating a startup script for sudo_logsrvd, which is missing from the Debian package, and you should use TLS between sudo and sudo_logsrvd. You can learn more about configuring TLS encryption from the documentation or my blog. The nice thing is that you can also use sudo_logsrvd on the Raspberry Pi in production in your home or small office. Unless you have dozens of sudo clients all utilizing the terminal heavily (like ls -laR /), not even the SD card of the Pi is a bottleneck.
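To sketch what such a startup script could look like, here is a minimal systemd unit; the binary path and the -n (stay in the foreground) option match common sudo 1.9 packaging, but verify both against your distribution before relying on it:
[Unit]
Description=sudo central session recording collector
After=network.target

[Service]
ExecStart=/usr/sbin/sudo_logsrvd -n

[Install]
WantedBy=multi-user.target
Save it as /etc/systemd/system/sudo_logsrvd.service, then enable it with sudo systemctl enable --now sudo_logsrvd.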
Logs and session recordings can help debug what happened on a given host if it acts strangely. Try this setup on your Raspberry Pi.
Use your Raspberry Pi as a data logger
Data logging can be done for various reasons. In a previous article, I wrote about how I monitor the electricity consumption of my household. The Raspberry Pi platform is a perfect match for such applications as it allows communication with many kinds of analog and digital sensors. This article shows how to log the CPU temperature of a Raspberry Pi and create a spreadsheet-based report on demand. Logging the CPU temperature won't require any additional boards or sensors.
Even without a Raspberry Pi, you can follow the steps described here if you replace the specific parts of the code.
Setup
The code is based on Pythonic, a graphical Python programming frontend. The easiest way to get started with Pythonic is to download and flash the Raspberry Pi image. If you don't have a Raspberry Pi, use one of the other installation methods mentioned on the GitHub page (e.g., Docker or Pip).
Once installed, connect the Raspberry Pi to the local network. Next, open the web-based GUI in a browser by navigating to http://pythonicrpi:7000/.
The Pythonic GUI should now appear in your browser.
Download and unzip the example available on GitHub. The archive consists of several file types.
Use the green-marked button to upload current_config.json; use the yellow-marked button to upload the XLSX file and the remaining *.py files.
You should have the complete configuration in front of you after you upload the files.
Implementation
The application can be separated into two logical parts: logging and report generation. Both parts run independently of each other.
Logging
The top part of the configuration can be summarized as the logging setup:
Involved elements:
- ManualScheduler - 0x0412dbdc: Triggers connected elements on startup (or manually).
- CreateTable - 0x6ce104a4: Assembles an SQL query which creates the working table (if not already existent).
- Scheduler - 0x557616c2: Triggers subsequent element every 5 seconds.
- DataAcquisition - 0x0e7b8360: Here we collect the CPU temperature and assemble an SQL query.
- SQLite - 0x196f9a6e: Represents an SQLite database, accepts the SQL queries.
I will take a closer look at DataAcquisition - 0x0e7b8360. Open the built-in web editor (code-server) by navigating to http://pythonicrpi:8000/. You can see all the element-related *.py files in the left pane. The DataAcquisition element is based on the type Generic Pipe. Open the file with the related id:
generic_pipe_0x0e7b8360.py
In this element, responsible for reading the CPU temperature, you can uncomment the lines of code depending on whether you're running this on a Raspberry Pi or not.
The above code produces an SQL query that inserts a row in the table my_table containing the Unix timestamp in seconds and the actual CPU temperature (or a random number). The code is triggered every five seconds by the previous element (Scheduler - 0x557616c2). The SQL query string is forwarded to the connected SQLite - 0x196f9a6e element, which applies the query to the related SQLite database on the file system. The process logs the CPU temperature in the database with a sampling rate of 1/5 samples per second.
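For context, the CPU temperature itself is easy to obtain on a Raspberry Pi: the kernel exposes it through sysfs in millidegrees Celsius, and Raspberry Pi OS also ships the vcgencmd utility. Either would be a reasonable data source for the element (the output values below are examples):
$ cat /sys/class/thermal/thermal_zone0/temp
48534
$ vcgencmd measure_temp
temp=48.5'C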
Report generation
The bottom network generates a report on request:
Involved elements:
- ManualScheduler - 0x7c840ba9: Activates the connected Telegram bot on startup (or manually).
- Telegram - 0x2e4148e2: Telegram bot that serves as an interface for requesting and delivering reports.
- GenericPipe - 0x2f78d74c: Assembles the SQL query that selects the data for the report.
- SQLite - 0x5617d487: Represents the SQLite database; it executes the query and returns the logged data.
- ReportGenerator - 0x13ad992a: Creates an XLSX-based report from the data.
The example code contains a spreadsheet template (report_template.xlsx) which also belongs to this configuration.
Note: To get the Telegram bot running, provide a Telegram bot token to communicate with the server. core.telegram.org describes the process of creating a bot token.
The Telegram element outputs a request as a Python string when a user requests a report. The GenericPipe- 0x2f78d74c element that receives the request assembles a SQL query which is forwarded to the SQLite - 0x5617d487 element. The actual data, which is read based on the SQL query, is now sent to the ReportGenerator- 0x13ad992a, which I will take a closer look at:
generic_pipe_13ad992a.py
def execute(self):
    path = Path.home() / 'Pythonic' / 'executables' / 'report_template.xlsx'

    try:
        wb = load_workbook(path)
    except FileNotFoundError as e:
        recordDone = Record(PythonicError(e), 'Template not found')
        self.return_queue.put(recordDone)
        con.close()
        return
    except Exception as e:
        recordDone = Record(PythonicError(e), 'Open log for details')
        self.return_queue.put(recordDone)
        con.close()
        return

    datasheet = wb['Data']
    # create an iterator over the rows in the datasheet
    rows = datasheet.iter_rows(min_row=2, max_row=999, min_col=1, max_col=2)
In the first part, I use the load_workbook() of the openpyxl library to load the spreadsheet template. If successfully loaded, I acquire a reference to the actual sheet in the datasheet variable. Afterward, I create an iterator over the rows in the datasheet, which is stored in the variable rows.
# Convert unix time [s] back into a datetime object, returns an iterator
reportdata_dt = map(lambda rec: (datetime.datetime.fromtimestamp(rec[0]), rec[1]), self.inputData)
# iterate till the first iterator is exhausted
for (dt, val), (row_dt, row_val) in zip(reportdata_dt, rows):
    row_dt.value = dt
    row_val.value = val

reportDate = datetime.datetime.now().strftime('%d_%b_%Y_%H_%M_%S')
filename = 'report_{}.xlsx'.format(reportDate)
filepath = Path.home() / 'Pythonic' / 'log' / filename

wb.save(filepath)
wb.close()

recordDone = Record(filepath, 'Report saved under: {}'.format(filename))
self.return_queue.put(recordDone)
The last part starts with the variable reportdata_dt: it holds an iterator which, when consumed, converts the raw Unix timestamps of the input data from the SQLite database (self.inputData) back into Python datetime objects. Next, I zip the reportdata_dt iterator with the previously created rows iterator and iterate until the first of them is exhausted, which should be reportdata_dt. During iteration, I fill the columns of each row with the timestamp and the value. In the last step, I save the spreadsheet with a filename consisting of the current date and time and forward the filename to the Telegram - 0x2e4148e2 element.
The Telegram - 0x2e4148e2 element then loads the file from disk back into memory and sends it to the user who requested the report.
The user receives a finished report: a spreadsheet populated with timestamps and temperature values.
Wrap up
This article shows how to easily convert the Raspberry Pi into a data logger. The Raspberry Pi platform allows you to interact with sensors of any kind, enabling you to monitor physical values as well as computed values. Using spreadsheets as the basis for your reports gives you a lot of flexibility and makes those reports very customizable. The openpyxl library in combination with Pythonic makes it simple to automate this process.
Here's how to log the CPU temperature of a Raspberry Pi and create a spreadsheet-based report on demand.
Image by:Opensource.com
Understanding the Digital World: My honest book review
I read a lot of books. I especially like to read books about computers, Linux, and the digital world we live in. I also enjoy reading books on the history of computing, about and by the people who helped make this digital world what it is today.
Imagine my excitement when I discovered the new second edition of an important book by Brian W. Kernighan, one of the leading figures in the creation of Unix, author or co-author of many influential books, and a professor of Computer Science at Princeton University. Understanding the Digital World combines computer history, technology, and personal story, along with discussions about how today's technology impacts our privacy.
Kernighan teaches a course at Princeton each year, "Computers in Our World," intended for computer users who are not Computer Science majors. He wrote this book to bring much of the information contained in that course to the world at large.
Kernighan starts with an exploration of the technology itself. The title of Chapter 1 is, "What is a Computer?" Covering the CPU and how it works, he describes various forms of storage, including RAM, cache, disk, and other types of secondary storage, and how they all work together. After this overview of the hardware, he describes algorithms, how they are used to solve problems, and how they get incorporated into computer programs. In later chapters, Kernighan discusses the internet, the TCP/IP protocols that drive it, and some of the tools used to communicate using the internet.
He looks at the data about ourselves (stored on our computers) that gets transmitted across the internet—with or without our permission. Although there are references to security throughout the book, Kernighan spends a great deal of these latter chapters discussing the many ways in which our data is vulnerable and ways to implement at least some level of protection.
The parts that scared me most were the discussions about how organizations can track our movements on the internet—the effects of this (and tools such as data mining) on our online experiences. I am familiar with using tools like firewalls and strategies such as using good passwords and deleting or deactivating programs and daemons that I am not using. But the ease with which we can get spied upon (there is no more accurate word for it) is appalling no matter what actions we may take.
My first inclination after reading this book was to send it to the two of my grandkids whom I am helping to build gaming computers. This book is a good way for them to learn how computers work at a level they can understand. They can also learn about the pitfalls (beyond those their parents have discussed with them) and how to be safe on the internet. I also suggested that their parents read it, too.
It is not all gloom and doom. Far from it. Kernighan manages to scare me while simultaneously ensuring that readers understand how to mitigate the threats he discusses. In the vast majority of his scenarios, I had already implemented many of the protections he covers.
This book has made me think more closely about how I work and play on the internet, the methods I use to protect my home network, and how I use my portable devices. Kernighan's level of paranoia is sufficient to ensure that readers pay attention while reassuring us that we can still use the internet, our computers, and other devices with a reasonable amount of safety so long as we take the appropriate precautions.
No! I am not going to tell you all of that. You'll get no spoilers from me.
Kernighan indicates to readers the sections that may get too technical, and you can skip over them. Still, overall this is a pretty easy read and accessible even for many non-technical readers. This was intentional on the author's part. So even if your technology quotient is fairly low, this book is still understandable. Despite the fact that he wrote the first edition of this book only five years ago, this second edition includes important new material that makes it even more applicable to today's technology and the lightning-fast dissemination of data. I found the new section on artificial intelligence quite enlightening.
I highly recommend this book to anyone who wants to learn more about how computers work and impact privacy and security in the modern world.
Brian W. Kernighan's second edition of Understanding the Digital World is worth a read for computer enthusiasts of any skill level.
How I run my blog on a Raspberry Pi
Like a lot of folks who enjoy tinkering with technology, I now have a small but growing collection of Raspberry Pi boxes around my house. I've used them for various projects: A PiHole network ad blocker, an OctoPi 3D print server, and a Minecraft server, among others.
However, the most custom project I've done is setting up a Raspberry Pi to act as a web server to host my own blog site, mandclu.com. I got the idea while researching for an interview I did a couple of years ago.
The project has evolved significantly since it started, so I thought it would be interesting to share.
Why run my own web server?
I started building websites a little over two decades ago. I used various hosting solutions at that time, including a (nearly) bare-metal Linode instance where I had to install and configure all the software myself.
I recently had a small account at a major hosting company that I used to serve a couple of personal projects. Over time, I've found that I'm less interested in using my time for freelance projects, so the cost was getting harder to justify. Because of the security measures built into their hosting platform, I also felt frustrated being limited by what tools I could use and how I could use them.
In short, I wanted to run my own server because it would be free, in the sense of free speech, free beer, and, as I would soon learn, free puppies.
Would I recommend that everyone host their own website? Absolutely not. It's been a fun project, and I've learned a ton along the way, but if my website was down for a few hours (or potentially longer) because of local power or network outage, I could live with that.
In a previous article, I discussed why I chose Drupal for my site. While I think it's a powerful and infinitely flexible solution, the steps below largely apply to any PHP-based CMS or development framework you might want to host.
A Raspberry Pi web server: The maiden voyage
I bought a Raspberry Pi 4 with 4GB of RAM for the project. I had seen some documentation suggesting that the quality of the MicroSD card you use with a Pi makes a significant difference in performance, so I also tried to source a decent card. All told, I was probably getting close to US$100, including a case and power adapter.
The first decision was what OS to use. CentOS seemed like the best choice for something exposed to the internet, so I decided to go with that. As it turns out, CentOS had some marked differences from any other flavor of Linux I've used, especially because it wanted to reset the permissions of all the server logs on every reboot. I eventually figured out how to handle that gracefully, but it added to the adventure.
Next, it was time to get the Pi set up to act as a web server. I know some super-smart DevOps folks who prefer to use Nginx as a web server for their projects, but I'm personally a lot more familiar with Apache. Also, Drupal implements some security controls using .htaccess files, so if going with Nginx, you would need to manage equivalent restrictions in the server configuration. But the truth is that more than anything else, I wanted to go with the devil I knew.
Fortunately, a quick search can bring up a variety of tutorials on how to install the remaining parts of a LAMP stack on CentOS (or your preferred flavor of Linux). Even better, modern package managers like Yum make the process relatively painless. Drupal also has a few PHP requirements of its own, so there was an extra step to make sure I met those. Finally, I like to use APCu as a PHP-native data cache to help speed up the delivery of PHP sites, so I made sure that it got installed and the PHP extension enabled.
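On CentOS, that package installation amounts to a handful of commands. As a rough sketch (package names vary a little between CentOS releases and third-party PHP repositories, so treat these as examples):
$ sudo yum install httpd mariadb-server php php-mysqlnd php-gd php-xml php-mbstring
$ sudo yum install php-pecl-apcu
$ sudo systemctl enable --now httpd mariadb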
While searching for answers for the miscellaneous hiccups, I came across an interesting add-on that made managing the Pi as a web server much easier: Cockpit. It provides an easy-to-use graphical interface to see the status of the machine and all its software: How it's running (or if it isn't). You can see when updates are available and run them, access logs, and more. It even includes its own command-line interface, so you can completely manage pretty much everything from the one interface.
Installing Drupal on Raspberry Pi
Getting Drupal itself installed is pretty straightforward if you know the intended process. If you haven't already, install the Composer PHP dependency manager. Then you can install Drupal in a couple of steps:
$ composer create-project drupal/recommended-project my_site_name_dir
Configure your web server to use the web directory within that install location (my_site_name_dir in the example above) as the document root for a virtual host (or a server block in Nginx).
If you try to access the virtual host, Drupal triggers the rest of the installation process for you.
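For reference, a minimal Apache virtual host for that layout could look something like the following; the domain and filesystem path are placeholders, and AllowOverride All is what lets Drupal's .htaccess rules take effect:
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/my_site_name_dir/web
    <Directory /var/www/my_site_name_dir/web>
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>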
I decided to create the site on my laptop, then push the site code to a Git repo on GitLab, and pull it down to the server from there, but that isn't strictly necessary if you're just looking to try out Drupal on your Pi.
Getting the word out (of the network)
I now had my Raspberry Pi working as a web server and my Drupal site running well on top of it. Great! But no one outside my network could access it.
I went into my router's web UI and used port forwarding to make sure incoming web requests (ports 80 and 443) would get directed to the Pi. I did that in a couple of minutes. But how would people find the site?
I bought a domain name, and my registrar had its own utility for dynamic DNS, which is great because a drawback of using your home's internet connection is that home users typically don't have a static IP. After installing their utility and waiting for the DNS setup to resolve, users could reach my new website at mandclu.com.
Of course, the site also needed to allow for secure connections, so I also needed to add an SSL certificate. It used to be that this meant purchasing a certificate that would have cost more than the Pi itself and paying that again every year on renewal. Fortunately, Let's Encrypt achieves a similar result for free. You can even install the certbot to renew the certificate automatically.
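On a Debian-style system, getting and auto-renewing that certificate can be as simple as the following; package names differ by distribution, so treat this as a sketch:
$ sudo apt install certbot python3-certbot-apache
$ sudo certbot --apache -d mandclu.com
The certbot package also installs a systemd timer that handles renewals automatically.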
My own Raspberry Pi web server
I was really happy with the result. Was it as fast as expensive, enterprise-grade hosting? Nope. But I could host my own site for free (or, more accurately, for the cost of the electricity to power the Pi), and I had complete freedom to configure the server any way I wanted.
It did seem like the site started to show occasional slowdowns over time, but it was fast enough for the meager traffic I was getting (almost none), so it met my needs.
I enjoyed playing with the site styling and posting content when I felt inspired to write it. And then, the move came.
I moved homes at the end of 2020 (literally the last day of the year). One of the downsides of having your website hosted from a Pi box in your living room is that moving house means your website goes down for a while. In my case, it was down for a few weeks, since getting my pet-project website back online wasn't a major priority.
Eventually, I got my web server Pi hooked up and plugged in again and was ready to add some new content. I was surprised to find that the site was running noticeably slower than I remembered. A great thing about being a web developer is that you make friends with lots of smart people, so I reached out to a friend who mentioned that MicroSD cards sometimes slow down over time under regular use.
Speeding up my web server
I decided it was time to rebuild the server, so I made several changes. For starters, I had since bought an 8GB Pi 4 to use as a Minecraft server, but my son's interest in the game had fallen off, so I decided to use that hardware for the new version. Instead of a MicroSD card for storage, I bought a low-capacity NVMe SSD and a USB 3 enclosure for it. Those two elements probably cost as much as I had previously spent on the Pi, MicroSD card, power supply, and case, but the server is still running really well nearly a year later.
Instead of just copying over everything I had previously installed, I decided to reinstall the software. Moving to the 8GB Pi 4 meant that I needed a 64-bit OS, which meant that Ubuntu server was my best option. There are more options today, but I've been really happy with Ubuntu, even though there was a new learning curve. Some directories are in different places. I had to get used to Apt instead of Yum for installing new packages, and so on. But the overall process was really the same, with a few minor differences in the steps themselves.
Another significant change I decided to make during the rebuild was to add Cloudflare as a content delivery network (CDN) to speed up the delivery of the site. In its most basic form, a CDN speeds up a website's delivery by keeping cached versions of the site's files at various local points of presence (PoP) distributed worldwide. Fortunately, Cloudflare has a free plan, so I decided to put that in front of my Pi-hosted website.
The result
The upgraded version of the Pi web server clocks in fast. Like, really fast.
I've run speed tests on many different websites (admittedly, most of them were Drupal), and these are some of the best scores I've seen. It does help that the site is simple by design. If there were more images on the site, it would probably score a little lower, especially for mobile (where the Lighthouse test throttles bandwidth to simulate a slow 4G connection).
It's worth pointing out the accessibility scores as well—with no effort on my part. Another advantage of running my site on Drupal is being able to build on top of a framework already rigorously tested to be easily used on screen readers and other assistive technologies.
The only work I had to do to hit the best practices score was installing and configuring the free Security Kit module.
Build your own Raspberry Pi server
If you've wanted to try Drupal for a personal web project, and especially if you happen to have an extra Raspberry Pi gathering dust, then I hope you'll try setting up your own server.
I set up a Raspberry Pi to act as a web server to host my personal blog on Drupal.
How I use Drupal as an advanced blogging platform
A couple of years ago, I decided I wanted a place to post my thoughts and play around with some emerging web technologies.
Why Drupal?
I make my living working with and evangelizing Drupal. So there's definitely some applicability to the saying, "when your only tool is a hammer, every problem looks like a nail."
In truth, I had considered using some static site solutions like Gatsby or Jekyll and then using free hosting options from GitHub or GitLab.
However, one of the things I enjoy about Drupal is how quickly I can create and adapt content structures and have the ability to draw on the considerable library of community-provided modules to extend its capabilities, all of which you can use for free.
Let me use a specific example to illustrate its flexibility. Initially, I wanted my site to be a blog, and there are a ton of options for that, including pure SaaS offerings like Medium. As I was working on it, however, it occurred to me that I could also use the site as a place to keep track of my various public speaking engagements.
Speaking of Drupal
I maintain several Drupal modules that I share with the community. I have also had the privilege of working with some very smart people, so I regularly have specific ideas around how you can use Drupal for some everyday use cases in easy and powerful ways.
When I submit a session proposal to an open source conference, the organizers often want to see a list of my recent public speaking engagements. I used to keep track of that in various ways, including a Google Doc. I then have to remember to keep the list up to date (invariably, I don't). And if a particular conference wants the information in a different format, I have to adapt all the content myself: add longer descriptions, video links, and more.
I added a content type (a customizable template for structured storage) on my Drupal site called Talk. By default, it had a title and description, so I added fields for the schedule, location, and a link to the video.
Drupal has robust media handling out of the box, so when a video of one of my talks gets posted online, all I have to do is copy and paste the URL into my site. I can display that as a link or an embedded video player. For better performance, I also added a community module that allows the site to lazy load the video players.
I recently realized that I often post the slides from my talks to SlideShare, so I added a field to keep track of those links. That change took less than a minute to implement. Now, if someone wants to revisit the content, they can find a page on my site with all the information: When it happened, the description, a video, and the slides.
Drupal also includes a powerful visual query builder called Views, great for generating content lists to meet specific needs in a point-and-click interface. I used Views to create lists of upcoming and past talks, including small video players for the past talks and full-sized players if visitors click through to the details. If a conference requires my speaking experience in a specific format, I can easily create a new View to provide precisely the information they need, in the format they need it, with just a few clicks.
Key Drupal features
Another great thing about creating a site with Drupal is that you get to build on the work of one of the most engaged open source communities in the world. And they've built in some impressive capabilities, which my site benefits from.
Performance
Drupal is built to use multiple layers of caching for better performance. It also supports BigPipe progressive rendering, which allows even complex pages to start rendering in the browser right away.
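For instance, the relevant core modules can be switched on with Drush. This is a minimal sketch, assuming a Drupal site managed with Drush; page_cache, dynamic_page_cache, and big_pipe are the core module machine names, and a standard install typically ships with them already enabled:

    # Enable core caching and BigPipe modules, then rebuild caches
    drush pm:enable page_cache dynamic_page_cache big_pipe
    drush cache:rebuild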
Accessibility
The Drupal community put a ton of work into the spiffy new Olivero theme. It not only looks beautiful but was also built for easy navigation by screen readers and other assistive technologies. Upon review, the National Federation of the Blind gushed that the team "knocked it out of the park." For my site, I worked out a way to override Olivero and make some minor stylistic changes without directly editing the theme. That allowed me to give my site some unique flavor while ensuring I can update it painlessly as Drupal continues to evolve.
My favorite Drupal modules
While Drupal is incredibly flexible out of the box, the possibilities are endless when you start to add from the immense library of modules available from the community, all for free. Here are a few that make my site better.
Smart Date
This module provides an intuitive, easy-to-use widget for entering dates, designed to align with the UX of typical calendar applications. It handles timezones elegantly. It also allows for natural-language formatting of date ranges: outputting "9-10AM on Tuesday, Feb 15" instead of "9:00AM Tuesday, Feb 15 - 10:00AM Tuesday, Feb 15," for example.
Blazy
By default, Drupal adds the loading="lazy" HTML attribute to img tags, but not all browsers support it. Also, my site loads multiple video players on the home page, which can cause a performance hit. The Blazy module adds support for multiple lazy-loading libraries and can lazy load video players.
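If you want to try it, the usual contributed-module workflow applies. A minimal sketch, assuming a Composer-managed Drupal site with Drush available (blazy is the module's machine name on drupal.org):

    # Download and enable the Blazy module
    composer require drupal/blazy
    drush pm:enable blazy
    drush cache:rebuild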
Pathauto
Drupal has long created URLs like /node/123, based on the numerical ID of a piece of content. Pathauto automatically generates more sophisticated, SEO-friendly URL aliases that include text, such as the type of content, how you categorized it, and the title (/articles/blog/composable-date-formatter-drupal). You can use this for creating breadcrumbs as well.
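For illustration, Pathauto patterns are assembled from tokens. A hedged sketch, assuming the core node tokens are available on your site, of a pattern that would produce aliases in the style shown above:

    articles/[node:content-type]/[node:title]

Pathauto then cleans each token's value (lowercasing words and replacing spaces with hyphens) when building the alias.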
If I wanted to get serious about SEO, I could add the automatic generation of various meta tags using the Metatag module, and also add meta tags so content looks better when shared on social media. If I wanted to get really serious about SEO, I could also add the Real-time SEO module to help me aggressively optimize for specific keywords. But, I doubt I'll ever take it that seriously.
Final thoughts
Are you interested in giving Drupal a try? There are one-click demos available at Simplytest.me, or you can use DrupalPod to try out Drupal in a cloud-hosted IDE if you want to get your hands dirty. In another article, I'll discuss a low-cost (basically free, once it's up and running) way to host your Drupal site.
Manage Linux users' home directories with systemd-homed
The systemd concept and implementation have introduced many changes since systemd began to replace the old SystemV startup and init tools. Over time, systemd has been extended into many other segments of the Linux environment.
One relatively new service, systemd-homed, extends the reach of systemd into the management of users' home directories. It manages only human users' accounts; system users, which fall in the User ID (UID) range of 0 through 999, are outside its scope. I support the systemd plan to take over the world, but I wondered if this was a bit excessive. Then I did some research.
What is systemd-homed?
The systemd-homed service supports user account portability independent of the underlying computer system. A practical example is to carry around your home directory on a USB thumb drive and plug it into any system, which would automatically recognize and mount it. According to Lennart Poettering, lead developer of systemd, access to a user's home directory should not be allowed to anyone unless the user is logged in. The systemd-homed service is designed to enhance security, especially for mobile devices such as laptops. It also seems like a tool that might be useful with containers.
This objective can only be achieved if the home directory contains all user metadata. The ~/.identity file stores user account information, which is only accessible to systemd-homed when the password is entered. This file holds all of the account metadata, including everything Linux needs to know about you, so that the home directory is portable to any Linux host that uses systemd-homed. This approach prevents having an account with a stored password on every system you might need to use.
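You can see the record systemd-homed keeps for a managed account with homectl's inspect verb. A minimal sketch; jdoe is a hypothetical user name, and the JSON output follows systemd's JSON user record format:

    # Show the user record for a homed-managed account
    homectl inspect jdoe
    # The same record in systemd's JSON user record format
    homectl inspect --json=pretty jdoe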
The home directory can also be encrypted using your password. Under systemd-homed, your home directory stores your password along with all of your user metadata. Your encrypted password is not stored anywhere else, so it cannot be accessed by anyone. Although the methods used to encrypt and store passwords on modern Linux systems are considered unbreakable, the best safeguard is to prevent them from being accessed in the first place. Assumptions about the invulnerability of their security have led many to ruin.
This service is primarily intended for use with portable devices such as laptops. Poettering states, "Homed is intended primarily for client machines, i.e., laptops and thus machines you typically ssh from a lot more than ssh to, if you follow what I mean." It is not intended for use on servers or workstations that are tethered to a single location by cables or locked into a server room.
The systemd-homed service is enabled by default on new installations, at least for Fedora, which is the distro I use. This configuration is by design, and I don't expect that to change. User accounts are not affected or altered in any way on systems with existing filesystems, or on upgrades and reinstallations that keep the existing partitions and logical volumes.
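You can check how your own distribution ships it:

    # Is the service enabled, and is it currently running?
    systemctl is-enabled systemd-homed.service
    systemctl status systemd-homed.service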
Creating controlled users
Traditional tools such as useradd create accounts and home directories that systemd-homed does not manage. Therefore, if you continue to use the conventional user management tools, the home directories on your system are not managed by systemd-homed. This is also the case with the non-root user account created during a new installation.
The homectl command
The homectl command creates user accounts that systemd-homed manages. Using the homectl command to create a new account generates the metadata needed to make the home directory portable.
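For example, creating and activating a managed account looks like this. A minimal sketch; jdoe is a hypothetical user name:

    # Create an account managed by systemd-homed, with a LUKS-encrypted home
    homectl create jdoe --real-name="Jane Doe" --storage=luks
    # Activate (unlock and mount) the home directory
    homectl activate jdoe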
The homectl man page has a good explanation of the objectives and function of the systemd-homed service, and its Example section is particularly interesting. Of the five examples, three show how to create user accounts with specific limits imposed, such as a maximum number of concurrent processes or a maximum amount of disk space.
In a non-homectl setup, the /etc/security/limits.conf file imposes these limits. The only advantage I can see here is that homectl adds the user and applies the limits with a single command, whereas with the traditional method, the sysadmin must configure the limits.conf file manually.
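To sketch the comparison (the flags are documented in the homectl man page; the values here are illustrative, and jdoe is again hypothetical):

    # Impose limits at account creation with homectl
    homectl create jdoe --tasks-max=500 --disk-size=10G

    # The rough traditional equivalent for the process limit, added
    # manually to /etc/security/limits.conf:
    #   jdoe  hard  nproc  500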
Limitations
The only significant limitation I am aware of is that it is not possible to access a user's home directory remotely using OpenSSH. This limitation is due to the current inability of PAM to provide access to a home directory managed by homectl. Poettering seems doubtful that this can be overcome. This issue would prevent me from using systemd-homed for my home directory on my primary workstation or even my laptop. I typically log into both computers remotely several times per day using SSH, so this is a showstopper for me.
The other concern I can see is that a USB thumb drive with your home directory on it is only useful if you have another Linux computer to plug it into, and that computer needs to be running systemd-homed.
It is optional
You don't have to use it, however. I plan to continue using the traditional tools for user management to support my workflow. The default for the few distros I have some knowledge of, including Fedora, is for the systemd-homed service to be enabled and running. You can disable and stop the systemd-homed service without impacting traditional user accounts.
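Disabling it is a one-liner:

    # Stop systemd-homed and prevent it from starting at boot
    systemctl disable --now systemd-homed.service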
Final thoughts
Sysadmins can use the systemd-homed service to securely manage roaming users' home directories. It is useful on portable devices like laptops and can be especially helpful for users who carry their home directory on a thumb drive and plug it into any convenient Linux computer.
The primary limitation of systemd-homed is that it is impossible to log in remotely over SSH to an account it manages. And even though systemd-homed is enabled by default, it does not affect home directories created with the useradd command. I do need to point out that, like many systemd tools, systemd-homed is optional. So I just stopped and disabled the service.
If I need to take my home directory in a package smaller than my laptop, I can just use a live USB with persistent storage.
Resources
- https://systemd.io/HOME_DIRECTORY/
- https://www.freedesktop.org/software/systemd/man/homectl.html
- https://www.freedesktop.org/software/systemd/man/systemd-homed.service.html
- https://wiki.archlinux.org/title/Systemd-homed