Best Gaming Monitor Under 250 in 2019

Are you a gamer looking for a gaming monitor with a budget of around 250 dollars? No problem: in this video we are going to show you the best gaming monitors under 250 dollars. We selected them based on our personal experience and compared them on price, quality, and much more.

Best Gaming Monitors Under 250 Dollars in 2019

You can easily buy these best gaming monitors under 250 on Amazon: https://amzn.to/2NddTv0

BlackArch Linux 2017-03-01 Hacking Distro Released With 50 New Tools And Kernel 4.9.11

The developers of BlackArch ethical hacking distro have released the new ISO images of their operating system. BlackArch Linux 2017-03-01 is now available with 50 new hacking tools, Linux kernel 4.9.11, and updated packages. The users can visit the BlackArch website and download the latest ISO images. 
Whenever we talk about Kali Linux alternatives, we often end up talking about Parrot OS. But there’s another great option that’s based on Arch Linux. Yes, I’m talking about BlackArch Linux. I keep track of its releases regularly, and today I’ll tell you about the freshly baked BlackArch Linux 2017-03-01.
BlackArch Linux 2017-03-01 is now available as an updated build with lots of refreshed components and packages. This update of the ethical hacking distro adds more than 50 new tools.

BlackArch Linux 2017-03-01 new features and changes:

  • All system packages updated
  • All BlackArch tools added
  • 50+ new tools added
  • Linux kernel 4.9.11
  • Several fixes in installs and dependencies
  • Menu entries for window managers

New ISOs available — Download and installation

So, if you’re willing to try out the new tools and get these fixes, you can go ahead and grab the updated ISO files in Live and Netinstall versions. The ISO files are available in both 64-bit and 32-bit variants. The Live ISO contains a complete and functional BlackArch Linux system, while the Netinstall image is a lightweight ISO for bootstrapping systems.
You can grab the torrent and ISO files from their website.
It should be noted that the BlackArch devs don’t recommend the use of UNetBootIn to write the ISO to USB drives. Instead, they ask you to use dd; the sketch below shows the usual invocation. Here, /dev/sdX is your USB drive and file.iso is your ISO file:
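# write the ISO image to the USB drive
# (bs=4M is illustrative; check the BlackArch docs for their exact recommendation)
sudo dd if=file.iso of=/dev/sdX bs=4M
# flush buffered writes before removing the drive
sync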
The default login for ISO files is root:blackarch

My 10 Years' Experience as a Linux Desktop User

I've been a regular desktop Linux user for just about a decade now. What has changed in that time? Keep reading for a look back at all the ways that desktop Linux has become easier to use -- and those in which it has become more difficult -- over the past ten years.
I installed Linux on my laptop for the first time in the summer of 2006. I started with SUSE, then moved on to Mandriva, and finally settled on Fedora Core.
By early 2007 I was using Fedora full time. There was no more Windows partition on my laptop. When I ran into problems or incompatibilities with Linux, my options were to sink or swim; there was no Windows to revert to.

A Decade of Improvement (Mostly)

Circa 2007, running Linux as a desktop operating system was tricky in most ways. Here's a look at the biggest pain points, and how they have been resolved today...
Wireless connectivity
I owned two wireless cards in 2007 (and as a college student, I wasn't in a position to invest in new ones): A pluggable USB device with some kind of RaLink chipset, and a built-in card with a Broadcom chipset. The RaLink device worked with ndiswrapper, a utility that let you use Windows drivers to control networking devices in Linux. But it would crash my entire PC periodically. I never figured that chip out fully. I have no idea if it would work better today.
Meanwhile, the Broadcom chip worked well, but only if you installed proprietary firmware that you "cut" out from a Windows system. This was not beginner-friendly work, and it took me a long time to figure it out. It also posed legal issues because the firmware was not licensed for use on Fedora.
Today, life is much easier for Broadcom owners. Free firmware is now available, and most Linux distributions come with tools that will extract it automatically. Plus, my laptop today has a wireless card with an Intel chipset, which performs excellently on Linux with absolutely no configuration required on my part.
Display
My laptop in 2007 had an Intel GPU, which worked very well out-of-the-box in Fedora. But I also had a desktop with an NVIDIA chip. It functioned by default, but only in a very primitive way. In order to get the NVIDIA chip to support graphical acceleration and decent screen resolution, you had to install a closed-source driver. That was easier said than done because to run the installation script, you had to stop your display entirely and work from a terminal.
In retrospect, the process was not very difficult for anyone with command-line experience. You just downloaded the script, made the file executable with chmod, and ran it with a ./ command, as sketched below. But for a Linux neophyte like myself, it was a huge challenge. I suppose the work paid off, though, because it was a sort of baptism by fire in learning about the CLI.
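For anyone who missed that rite of passage, the steps looked roughly like this (the installer filename is a placeholder; the real one carries the driver version):

# make the downloaded installer executable, then run it
# (done from a text console, with the display server stopped)
chmod +x NVIDIA-Linux-x86_64-XXX.run
./NVIDIA-Linux-x86_64-XXX.run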
A decade later, it has been years since I have had to install a display driver manually on Linux. Mostly that has been because all of the computers I have acquired since I began using Linux have Intel GPUs. But I gather that NVIDIA chips are now much better supported in Linux, thanks to open source driver projects like nouveau.
Office Software
I was impressed in 2007 by OpenOffice, the office software that came preinstalled with most Linux distributions. It more than met my needs at the time, which mostly involved writing mediocre history and English papers for my college courses. And I loved that it had a neat button for turning any type of document into a PDF file. My friends were jealous of that.
However, my relationship with OpenOffice and its descendant, LibreOffice (which I now use most of the time), has become rockier since 2007. Both platforms remain solid office suites. But my requirements for compatibility with Microsoft Office are much more strict today. I now write scholarly articles and books (not to mention mediocre blog posts). Every academic press with which I have ever worked expects files to be submitted in Word format, and I need to ensure that everything I see when working on a document is identical to what the editors will see when opening it in Word. For that reason, I keep Word on my Linux system and run it via Wine when I am working on a manuscript. I still use LibreOffice for writing less official documents, however.
PDF Readers
Working with PDF files is the one area that has perhaps become more challenging during my decade as a desktop Linux user.
Back in 2007, you could install a native version of Adobe Acrobat Reader on Linux. But Adobe stopped supporting Linux with Reader 9.
In general that's not a problem. Open source PDF readers can display most PDF documents just fine. But occasionally, I need to work with special types of PDFs that, for whatever silly reason, are only compatible with Acrobat. I have yet to find a good solution to this. Acrobat 9 installed on Ubuntu won't let me make comments on PDF documents. I can't get the Windows version of Acrobat 11 to work via Wine. As a workaround, I have to boot up a Windows virtual machine. I curse Adobe while I wait for it to boot, and long for the days when Acrobat Reader "just worked" on Linux.
Netflix
Last but not least there's Netflix. The change here is pretty simple. Back in 2007, when Netflix streaming first debuted, it did not work with Linux at all.
Then, in 2012, it became possible to run Netflix on Linux using a special script that was not supported by Netflix. By 2014 Netflix officially supported Ubuntu. Today, the service "just works" for me in Chrome -- which is good because Netflix is the only thing standing between me and the outrageous cost of cable.

3 little things in Linux 4.10 that will make a big difference

Linux never sleeps. Linus Torvalds is already hard at work pulling together changes for the next version of the kernel (4.11). But with Linux 4.10 now out, three groups of changes are worth paying close attention to because they improve performance and enable feature sets that weren’t possible before on Linux.
Here’s a rundown of those changes to 4.10 and what they likely will mean for you, your cloud providers, and your Linux applications.

1. Virtualized GPUs

One class of hardware that’s always been difficult to emulate in virtual machines is GPUs. Typically, VMs provide their own custom video driver (slow), and graphics calls have to be translated (slow) back and forth between guest and host. The ideal solution would be to run the same graphics driver in a guest that you use on the host itself and have all the needed calls simply relayed back to the GPU.
There’s more here than being able to play, say, Battlefield 1 in a VM. Every resource provided by the GPU, including GPU-accelerated processing provided through libraries like CUDA, would be available to the VM as if it were running on regular, unvirtualized iron.
Intel introduced a set of processor extensions, called GVT-G, to enable these things, but only in Linux 4.10 does OS-level support finally show up. In addition to kernel-level support for this feature via KVM (KVMGT), Intel has contributed support for the Xen and QEMU hypervisors.
Direct GVT-G support inside the kernel means third-party products can leverage it without anything more than the “vanilla” kernel. It’s akin to how Docker turned a collection of native Linux features into a hugely successful devops solution; a big part of the success was because those features were available to most modern incarnations of Linux.

2. Better cache control technology

CPUs today are staggeringly fast. What’s slow is when they have to pull data from main memory, so crucial data is cached close to the CPU. That strategy continues to this day, with ever-growing cache sizes. But at least some of the burden for cache management falls to the OS as well, and Linux 4.10 introduces some new techniques and tooling.
First is support for Intel Cache Allocation Technology (CAT), a feature available on Haswell-generation chip sets or later. With CAT, space in the L3 (and later L2) cache can be apportioned and reserved for specific tasks, so a given application’s cache isn’t flushed out by other applications. CAT also apparently provides some protection against cache-based timing attacks—no small consideration given that every nook and cranny of modern computing is being scrutinized as a possible attack vector.
Hand in hand with support for this feature is a new system tool, perf c2c. In systems with multiple sockets and nonuniform memory access (NUMA), threads running on different CPUs can make the cache less efficient if they try to modify the same memory segments. The perf c2c tool helps tease out such performance issues, although like CAT it relies on features provided specifically by Intel processors.
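Driving the tool is a two-step affair; a quick sketch, where my_workload stands in for a binary of your own:

# sample cache-to-cache contention while the workload runs
perf c2c record -- ./my_workload
# then summarize the cachelines with the most cross-CPU contention
perf c2c report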

3. Writeback management

“Since the dawn of time, the way Linux synchronizes to disk the data written to memory by processes (aka. background writeback) has sucked,” writes KernelNewbies.org in its explanation of how Linux writeback management has been improved in 4.10. From now on, the I/O requests queue is monitored for the latency of its requests, and operations that cause longer latencies—heavy write operations in particular—can be throttled to allow other threads to have a chance.
In roughly the same vein, an experimental feature, off by default, provides a RAID5 writeback cache, so writes across multiple disks in a RAID5 array can be batched together. Another experimental feature, hybrid block polling (also off by default), provides a new way to poll devices that use a lot of throughput. Such polling helps improve performance, but if done too often it makes things worse; the new polling mechanisms are designed to ensure polling improves performance without driving up CPU usage.
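Whether the polling knobs are active on a given device can be checked through sysfs; a sketch, assuming an NVMe drive named nvme0n1 (paths per the kernel's block queue sysfs documentation):

# nonzero if polling is enabled for this queue
cat /sys/block/nvme0n1/queue/io_poll
# polling mode: -1 classic busy-poll, 0 adaptive hybrid, >0 fixed delay in usec
cat /sys/block/nvme0n1/queue/io_poll_delay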
This is all likely to have a major payoff with cloud computing instances that are optimized for heavy I/O. Amazon has several instance types in this class, and a kernel-level improvement that provides more balance between reads and writes, essentially for free, will be welcomed by them—and their industry rivals. This rising tide can’t help but lift all boats.

How to run commands at shutdown on Linux

Linux and Unix systems have long made it pretty easy to run a command on boot. Just add your command to /etc/rc.local and away you go. But as it turns out, running a command on shutdown is a little more complicated.
Why would you want to run a command as the computer shuts down? Perhaps you want to de-register a machine or service from a database. Maybe you want to copy data from a volatile storage system to a permanent location. Want your computer to post "#RIP me!" on its Twitter account before it shuts down?
I should clarify at this point that I'm talking about so-called "one-shot" commands as opposed to stopping a daemon. It's tempting to think of them the same way, though. If you're familiar with SysVinit you might think, "Oh, I'll just create a kill script." For example, /etc/rc.d/rc3.d/K99runmycommandatshutdown should get invoked when your system exits runlevel 3. After all, that's how the scripts in /etc/init.d/ get stopped.
That's a great guess, but it turns out that it's wrong. SysVinit does not blindly run the kill scripts. Instead, it looks for (on Red Hat 6) /var/lock/subsys/service_name (where service_name here would be runmycommandatshutdown). So you have to get a little bit tricky and treat your script like a regular service. The script below gives an example:
#!/bin/sh
# chkconfig: 2345 20 80
# description: An example init script to run a command at shutdown

# runmycommandatshutdown runs a command at shutdown. Very creative.

# The lock file must carry the service's name, or SysVinit will never
# invoke our stop method at shutdown.
LOCKFILE=/var/lock/subsys/runmycommandatshutdown

start(){
    # Touch our lock file so that stopping will work correctly
    touch "${LOCKFILE}"
}

stop(){
    # Remove our lock file
    rm -f "${LOCKFILE}"
    # Run that command that we wanted to run
    mycommand
}

case "$1" in
    start) start;;
    stop) stop;;
    *)
        echo $"Usage: $0 {start|stop}"
        exit 1
        ;;
esac
exit 0
After putting that script in /etc/init.d/runmycommandatshutdown and enabling it with chkconfig as shown below, your command will be run at shutdown.
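The registration looks like this (run as root; chkconfig reads the runlevels from the header comments in the script):

# register the script, then switch it on for its default runlevels
chkconfig --add runmycommandatshutdown
chkconfig runmycommandatshutdown on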

systemd

All of that is great, but what if you're running a distribution of Linux that uses systemd instead of SysVinit? Turns out, it's much simpler with systemd. All you have to do is put your script in /usr/lib/systemd/system-shutdown/, which is handled by systemd-halt.service. Of course, if you need to manage dependencies in a particular order (e.g., you can't post a tweet if the network stack is down), then you can write a systemd service unit file. For example:
[Unit]
Description=Run mycommand at shutdown
Requires=network.target
DefaultDependencies=no
Before=shutdown.target reboot.target

[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/true
# systemd wants an absolute path here; adjust to wherever mycommand lives
ExecStop=/usr/local/bin/mycommand

[Install]
WantedBy=multi-user.target
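Because RemainAfterExit=true keeps the unit active after its trivial ExecStart, the ExecStop command fires when the unit is stopped at shutdown. Installing it is the usual routine (the unit filename here is a hypothetical choice):

# drop the unit where local units live, then enable and start it
sudo cp mycommand-at-shutdown.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable mycommand-at-shutdown.service
sudo systemctl start mycommand-at-shutdown.service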

How To Make Your Own Fastest Linux Computer System On Small Budget



There’s nothing more satisfying than watching a system boot up almost instantaneously when the power switch is hit. Long gone are the days of going to make yourself a brew while those spinning platters buzz and the display kicks into life, lazily dragging you into the GUI you call home.
But surely that luxury of speed is reserved for those who are willing to drop £1,000+ on a new system? Fortunately, this is not the case anymore. With advancements in technology over the last six years, and Intel’s aggressive push to keep reinvigorating its chipsets each and every generation, we’re starting to see more and more affordable budget, speed-oriented components finally making it to market.
The SSD has succeeded the hard drive, with sub-10-second boot times and lightning-quick file transfers. However, three years on, we’ve seen both the rise and fall of the SATA III bus. This was a standard that was supposed to last us until 2020, but it now lies completely saturated, with only the ever-enduring HDD still making good use of the connectivity.
Fortunately for us, workarounds have been found in the search for ever-increasing performance and ever-increasing speed. Utilising the PCIe bus to transfer data between a drive and the processor has proved to offer almost limitless potential when it comes to file transfer speeds, at least for the time being. And as M.2 and U.2 PCIe SSDs have matured, so too have their sequential reads and writes, alongside a continual plummet in the cost to produce them. What this has led to is the potential to piece together a system for just over £400.
And that’s the system we intend to walk you through today: one including an Intel Core i5-6500 quad-core processor, one of the latest chipsets featuring the lightning-fast and insanely energy-efficient DDR4 memory standard, and a PCIe M.2 SSD. In our case, we’ve opted for one of Samsung’s OEM PM961 M.2 drives, specifically the 128GB variant.

An introduction to the Linux boot and startup processes


Understanding the Linux boot and startup processes is important for being able both to configure Linux and to resolve startup issues. This article presents an overview of the boot sequence using the GRUB2 bootloader and the startup sequence as performed by the systemd initialization system.
In reality, there are two sequences of events that are required to boot a Linux computer and make it usable: boot and startup. The boot sequence starts when the computer is turned on, and is completed when the kernel is initialized and systemd is launched. The startup process then takes over and finishes the task of getting the Linux computer into an operational state.
Overall, the Linux boot and startup process is fairly simple to understand. It consists of the following steps, which will be described in more detail in the following sections.
  • BIOS POST
  • Boot loader (GRUB2)
  • Kernel initialization
  • Start systemd, the parent of all processes.
Note that this article covers GRUB2 and systemd because they are the current boot loader and initialization software for most major distributions. Other software options have been used historically and are still found in some distributions.

The boot process

The boot process can be initiated in one of a couple of ways. First, if power is turned off, turning it on will begin the boot process. Second, if the computer is already running, a logged-in user, whether root or an unprivileged user, can initiate the boot sequence from the GUI or the command line by requesting a reboot. A reboot first shuts the system down and then restarts the computer.

BIOS POST

The first step of the Linux boot process really has nothing whatever to do with Linux. This is the hardware portion of the boot process and is the same for any operating system. When power is first applied to the computer, it runs the POST (Power On Self Test), which is part of the BIOS (Basic Input/Output System).
When IBM designed the first PC back in 1981, the BIOS was designed to initialize the hardware components. POST is the part of the BIOS whose task is to ensure that the computer hardware functions correctly. If POST fails, the computer may not be usable, so the boot process does not continue.
BIOS POST checks the basic operability of the hardware and then it issues a BIOS interrupt, INT 13H, which locates the boot sectors on any attached bootable devices. The first boot sector it finds that contains a valid boot record is loaded into RAM and control is then transferred to the code that was loaded from the boot sector.
The boot sector is really the first stage of the boot loader. Most Linux distributions use one of three boot loaders: GRUB, GRUB2, or LILO. GRUB2 is the newest and is used much more frequently these days than the older options.

GRUB2

GRUB2 stands for "GRand Unified Bootloader, version 2" and it is now the primary bootloader for most current Linux distributions. GRUB2 is the program which makes the computer just smart enough to find the operating system kernel and load it into memory. Because it is easier to write and say GRUB than GRUB2, I may use the term GRUB in this document but I will be referring to GRUB2 unless specified otherwise.
GRUB has been designed to be compatible with the multiboot specification which allows GRUB to boot many versions of Linux and other free operating systems; it can also chain load the boot record of proprietary operating systems.
GRUB can also allow the user to choose to boot from among several different kernels for any given Linux distribution. This affords the ability to boot to a previous kernel version if an updated one fails somehow or is incompatible with an important piece of software. GRUB1 can be configured using the /boot/grub/grub.conf file.
GRUB1 is now considered to be legacy and has been replaced in most modern distributions with GRUB2, which is a rewrite of GRUB1. Red Hat based distros upgraded to GRUB2 around Fedora 15 and CentOS/RHEL 7. GRUB2 provides the same boot functionality as GRUB1 but GRUB2 is also a mainframe-like command-based pre-OS environment and allows more flexibility during the pre-boot phase. GRUB2 is configured with /boot/grub2/grub.cfg.
The primary function of either GRUB is to get the Linux kernel loaded into memory and running. Both versions of GRUB work essentially the same way and have the same three stages, but I will use GRUB2 for this discussion of how GRUB does its job. The configuration of GRUB or GRUB2 and the use of GRUB2 commands is outside the scope of this article.
Although GRUB2 does not officially use the stage notation for the three stages of GRUB2, it is convenient to refer to them in that way, so I will in this article.

Stage 1

As mentioned in the BIOS POST section, at the end of POST the BIOS searches the attached disks for a boot record, usually located in the Master Boot Record (MBR). It loads the first one it finds into memory and then starts executing it. The bootstrap code, i.e., GRUB2 stage 1, is very small because it must fit into the first 512-byte sector on the hard drive along with the partition table. The total amount of space allocated for the actual bootstrap code in a classic generic MBR is 446 bytes. The 446-byte file for stage 1 is named boot.img and does not contain the partition table, which is added to the boot record separately.
Because the boot record must be so small, it is also not very smart and does not understand filesystem structures. Therefore the sole purpose of stage 1 is to locate and load stage 1.5. In order to accomplish this, stage 1.5 of GRUB must be located in the space between the boot record itself and the first partition on the drive. After loading GRUB stage 1.5 into RAM, stage 1 turns control over to stage 1.5.

Stage 1.5

As mentioned above, stage 1.5 of GRUB must be located in the space between the boot record itself and the first partition on the disk drive. This space was left unused historically for technical reasons. The first partition on the hard drive begins at sector 63, and with the MBR in sector 0, that leaves 62 512-byte sectors—31,744 bytes—in which to store the core.img file, which is stage 1.5 of GRUB. The core.img file is 25,389 bytes, so there is plenty of space available between the MBR and the first disk partition in which to store it.
Because of the larger amount of code that can be accommodated for stage 1.5, it can have enough code to contain a few common filesystem drivers, such as the standard EXT and other Linux filesystems, FAT, and NTFS. The GRUB2 core.img is much more complex and capable than the older GRUB1 stage 1.5. This means that stage 2 of GRUB2 can be located on a standard EXT filesystem but it cannot be located on a logical volume. So the standard location for the stage 2 files is in the /boot filesystem, specifically /boot/grub2.
Note that the /boot directory must be located on a filesystem that is supported by GRUB. Not all filesystems are. The function of stage 1.5 is to begin execution with the filesystem drivers necessary to locate the stage 2 files in the /boot filesystem and load the needed drivers.

Stage 2

All of the files for GRUB stage 2 are located in the /boot/grub2 directory and several subdirectories. GRUB2 does not have an image file like stages 1 and 1.5. Instead, it consists mostly of runtime kernel modules that are loaded as needed from the /boot/grub2/i386-pc directory.
The function of GRUB2 stage 2 is to locate and load a Linux kernel into RAM and turn control of the computer over to the kernel. The kernel and its associated files are located in the /boot directory. The kernel files are identifiable as they are all named starting with vmlinuz. You can list the contents of the /boot directory to see the currently installed kernels on your system.
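A quick way to see them (the glob matches the vmlinuz naming just described):

# each vmlinuz file is one bootable kernel
ls -l /boot/vmlinuz*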
GRUB2, like GRUB1, supports booting from one of a selection of Linux kernels. DNF, the package manager on Fedora and other Red Hat-based distributions, supports keeping multiple versions of the kernel so that if a problem occurs with the newest one, an older version can be booted. By default, GRUB provides a pre-boot menu of the installed kernels, including a rescue option and, if configured, a recovery option.
Stage 2 of GRUB2 loads the selected kernel into memory and turns control of the computer over to the kernel.

Kernel

All of the kernels are in a self-extracting, compressed format to save space. The kernels are located in the /boot directory, along with an initial RAM disk image, and device maps of the hard drives.
After the selected kernel is loaded into memory and begins executing, it must first extract itself from the compressed version of the file before it can perform any useful work. Once the kernel has extracted itself, it loads systemd, which is the replacement for the old SysV init program, and turns control over to it.
This is the end of the boot process. At this point, the Linux kernel and systemd are running but unable to perform any productive tasks for the end user because nothing else is running.

The startup process

The startup process follows the boot process and brings the Linux computer up to an operational state in which it is usable for productive work.

systemd

systemd is the mother of all processes and it is responsible for bringing the Linux host up to a state in which productive work can be done. Some of its functions, which are far more extensive than the old init program, are to manage many aspects of a running Linux host, including mounting filesystems, and starting and managing system services required to have a productive Linux host. Any of systemd's tasks that are not related to the startup sequence are outside the scope of this article.
First, systemd mounts the filesystems as defined by /etc/fstab, including any swap files or partitions. At this point, it can access the configuration files located in /etc, including its own. It uses its configuration file, /etc/systemd/system/default.target, to determine the state, or target, into which it should boot the host. The default.target file is only a symbolic link to the true target file. For a desktop workstation, this is typically going to be graphical.target, which is equivalent to runlevel 5 in the old SystemV init. For a server, the default is more likely to be multi-user.target, which is like runlevel 3 in SystemV. The emergency.target is similar to single-user mode.
Note that targets and services are systemd units.
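You can inspect or change that symlink with systemctl rather than editing it by hand:

# show the current default target
systemctl get-default
# point default.target at the text-mode multi-user target
sudo systemctl set-default multi-user.target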
Table 1, below, is a comparison of the systemd targets with the old SystemV startup runlevels. The systemd target aliases are provided by systemd for backward compatibility. The target aliases allow scripts—and many sysadmins like myself—to use SystemV commands like init 3 to change runlevels. Of course, the SystemV commands are forwarded to systemd for interpretation and execution.
SystemV Runlevel | systemd target    | systemd target aliases | Description
(none)           | halt.target       | (none)                 | Halts the system without powering it down.
0                | poweroff.target   | runlevel0.target       | Halts the system and turns the power off.
S                | emergency.target  | (none)                 | Single user mode. No services are running; filesystems are not mounted. This is the most basic level of operation with only an emergency shell running on the main console for the user to interact with the system.
1                | rescue.target     | runlevel1.target       | A base system including mounting the filesystems with only the most basic services running and a rescue shell on the main console.
2                | (none)            | runlevel2.target       | Multiuser, without NFS but all other non-GUI services running.
3                | multi-user.target | runlevel3.target       | All services running but command line interface (CLI) only.
4                | (none)            | runlevel4.target       | Unused.
5                | graphical.target  | runlevel5.target       | Multi-user with a GUI.
6                | reboot.target     | runlevel6.target       | Reboot.
(none)           | default.target    | (none)                 | This target is always aliased with a symbolic link to either multi-user.target or graphical.target. systemd always uses the default.target to start the system. The default.target should never be aliased to halt.target, poweroff.target, or reboot.target.
Table 1: Comparison of SystemV runlevels with systemd targets and some target aliases.
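The runlevel analogy carries over directly; the systemd-native way to do what init 3 did is:

# switch the running system to the text-mode multi-user target (like "init 3")
sudo systemctl isolate multi-user.target
# and back to the GUI (like "init 5")
sudo systemctl isolate graphical.target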
Each target has a set of dependencies described in its configuration file. systemd starts the required dependencies. These dependencies are the services required to run the Linux host at a specific level of functionality. When all of the dependencies listed in the target configuration files are loaded and running, the system is running at that target level.
systemd also looks at the legacy SystemV init directories to see if any startup files exist there. If so, systemd uses them as configuration files to start the services those files describe. The deprecated network service is a good example of one that still uses SystemV startup files in Fedora.
Figure 1, below, is copied directly from the bootup man page. It shows the general sequence of events during systemd startup and the basic ordering requirements to ensure a successful startup.
The sysinit.target and basic.target targets can be considered as checkpoints in the startup process. Although systemd has as one of its design goals to start system services in parallel, there are still certain services and functional targets that must be started before other services and targets can be started. These checkpoints cannot be passed until all of the services and targets required by that checkpoint are fulfilled.
So the sysinit.target is reached when all of the units on which it depends are completed. All of those units must be completed: mounting filesystems, setting up swap files, starting udev, setting the random generator seed, initiating low-level services, and setting up cryptographic services if one or more filesystems are encrypted. Within the sysinit.target, though, those tasks can be performed in parallel.
The sysinit.target starts up all of the low-level services and units required for the system to be marginally functional and that are required to enable moving on to the basic.target.
   local-fs-pre.target
            |
            v
   (various mounts and   (various swap   (various cryptsetup
    fsck services...)     devices...)        devices...)       (various low-level   (various low-level
            |                  |                  |             services: udevd,     API VFS mounts:
            v                  v                  v             tmpfiles, random     mqueue, configfs,
     local-fs.target      swap.target     cryptsetup.target    seed, sysctl, ...)      debugfs, ...)
            |                  |                  |                    |                    |
            \__________________|_________________ | ___________________|____________________/
                                                 \|/
                                                  v
                                           sysinit.target
                                                  |
             ____________________________________/|\________________________________________
            /                  |                  |                    |                    \
            |                  |                  |                    |                    |
            v                  v                  |                    v                    v
        (various           (various               |                (various          rescue.service
       timers...)          paths...)              |               sockets...)               |
            |                  |                  |                    |                    v
            v                  v                  |                    v              rescue.target
      timers.target      paths.target             |             sockets.target
            |                  |                  |                    |
            v                  \_________________ | ___________________/
                                                 \|/
                                                  v
                                            basic.target
                                                  |
             ____________________________________/|                                 emergency.service
            /                  |                  |                                         |
            |                  |                  |                                         v
            v                  v                  v                                 emergency.target
        display-        (various system    (various system
    manager.service         services           services)
            |             required for            |
            |            graphical UIs)           v
            |                  |           multi-user.target
            |                  |                  |
            \_________________ | _________________/
                              \|/
                               v
                     graphical.target
Figure 1: The systemd startup map.
After the sysinit.target is fulfilled, systemd next starts the basic.target, starting all of the units required to fulfill it. The basic target provides some additional functionality by starting units that are required for the next target. These include setting up things like paths to various executable directories, communication sockets, and timers.
Finally, the user-level targets, multi-user.target or graphical.target, can be initialized. Notice that multi-user.target must be reached before the graphical target's dependencies can be met.
The underlined targets in Figure 1 are the usual startup targets. When one of these targets is reached, startup has completed. If multi-user.target is the default, you should see a text-mode login on the console. If graphical.target is the default, you should see a graphical login; the specific GUI login screen you see depends on your default display manager.
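You can trace these dependency chains on a live system; systemctl prints the tree of units a target pulls in:

# show everything a target depends on
systemctl list-dependencies basic.target
systemctl list-dependencies graphical.target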

Issues

I recently had a need to change the default boot kernel on a Linux computer that used GRUB2. I found that some of the commands did not seem to work properly for me, or that I was not using them correctly. I am not yet certain which was the case, and need to do some more research.
The grub2-set-default command did not properly set the default kernel index in the /etc/default/grub file for me, so the desired alternate kernel did not boot. So I manually changed GRUB_DEFAULT=saved to GRUB_DEFAULT=2 in /etc/default/grub, where 2 is the index of the installed kernel I wanted to boot. Then I ran grub2-mkconfig > /boot/grub2/grub.cfg to create the new GRUB configuration file. This circumvention worked as expected and booted to the alternate kernel.
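In shell terms, the workaround amounts to the following (the index 2 and the Red Hat-style paths match my system; adjust both to yours):

# pin the default menu entry by index instead of the "saved" mechanism
sudo sed -i 's/^GRUB_DEFAULT=saved/GRUB_DEFAULT=2/' /etc/default/grub
# regenerate the GRUB2 configuration from the updated defaults
sudo grub2-mkconfig -o /boot/grub2/grub.cfg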

Conclusions

GRUB2 and the systemd init system are the key components in the boot and startup phases of most modern Linux distributions. Despite the fact that there has been controversy surrounding systemd especially, these two components work together smoothly to first load the kernel and then to start up all of the system services required to produce a functional Linux system.
Although I do find both GRUB2 and systemd more complex than their predecessors, they are also just as easy to learn and manage. The man pages have a great deal of information about systemd, and freedesktop.org has the complete set of systemd man pages online.