
BlackArch Linux 2017-03-01 Hacking Distro Released With 50 New Tools And Kernel 4.9.11

The developers of the BlackArch ethical hacking distro have released new ISO images of their operating system. BlackArch Linux 2017-03-01 is now available with 50 new hacking tools, Linux kernel 4.9.11, and updated packages. Users can visit the BlackArch website and download the latest ISO images.
Whenever we talk about Kali Linux alternatives, we often end up talking about Parrot OS. But there's another great option that's based on Arch Linux. Yes, I'm talking about BlackArch Linux. I keep tracking its releases regularly, and today I'll tell you about the freshly baked BlackArch Linux 2017-03-01.
BlackArch Linux 2017-03-01 ships as an updated build with lots of refreshed components and packages. This release of the ethical hacking distro adds more than 50 new tools.

BlackArch Linux 2017-03-01 new features and changes:

  • All system packages updated
  • All BlackArch tools added
  • 50+ new tools added
  • Linux kernel 4.9.11
  • Several fixes in installs and dependencies
  • Menu entries for window managers

New ISOs available — Download and installation

So, if you're willing to try out the new tools and get these fixes, you can go ahead and grab the updated ISO files in Live and Netinstall versions. The ISO files are available in both 64-bit and 32-bit flavors. The Live ISO contains a complete and functional BlackArch Linux system, while the Netinstall image is a lightweight ISO for bootstrapping systems.
You can grab the torrent and ISO files from their website.
It should be noted that BlackArch devs don’t recommend the use of UNetBootIn to write the ISO to USB drives. Instead, they ask you to use the following code. Here, /dev/sdX is your USB drive and file.iso is your ISO file:
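The recommended command is a plain dd invocation along these lines (the exact block size is not critical, but double-check the device name first, because dd will overwrite /dev/sdX without asking):

# write the image to the USB drive, then flush buffers before unplugging it
sudo dd bs=512M if=file.iso of=/dev/sdX
sync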
The default login for ISO files is root:blackarch

My 10 Years Experience As a Linux Desktop User

I've been a regular desktop Linux user for just about a decade now. What has changed in that time? Keep reading for a look back at all the ways that desktop Linux has become easier to use -- and those in which it has become more difficult -- over the past ten years.
I installed Linux on my laptop for the first time in the summer of 2006. I started with SUSE, then moved on to Mandriva, and finally settled on Fedora Core.
By early 2007 I was using Fedora full time. There was no more Windows partition on my laptop. When I ran into problems or incompatibilities with Linux, my options were to sink or swim. There was no Windows to fall back on.

A Decade of Improvement (Mostly)

Circa 2007, running Linux as a desktop operating system was tricky in many ways. Here's a look at the biggest pain points, and how they have been resolved today...
Wireless connectivity
I owned two wireless cards in 2007 (and as a college student, I wasn't in a position to invest in new ones): a pluggable USB device with some kind of Ralink chipset, and a built-in card with a Broadcom chipset. The Ralink device worked with ndiswrapper, a utility that let you use Windows drivers to control networking devices in Linux, but it would crash my entire PC periodically. I never fully figured that chip out, and I have no idea whether it would work better today.
Meanwhile, the Broadcom chip worked well, but only if you installed proprietary firmware that you "cut" out from a Windows system. This was not beginner-friendly work, and it took me a long time to figure it out. It also posed legal issues because the firmware was not licensed for use on Fedora.
Today, life is much easier for Broadcom owners. Free firmware is now available, and most Linux distributions come with tools that will extract it automatically. Plus, my laptop today has a wireless card with an Intel chipset, which performs excellently on Linux with absolutely no configuration required on my part.
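On a Debian-based distribution today, for instance, the whole dance reduces to installing one package. The package name below is the Debian/Ubuntu one and may differ elsewhere:

# fetches the proprietary Windows driver and runs b43-fwcutter on it for you
sudo apt-get install firmware-b43-installer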
Display
My laptop in 2007 had an Intel GPU, which worked very well out-of-the-box in Fedora. But I also had a desktop with an NVIDIA chip. It functioned by default, but only in a very primitive way. In order to get the NVIDIA chip to support graphical acceleration and decent screen resolution, you had to install a closed-source driver. That was easier said than done because to run the installation script, you had to stop your display entirely and work from a terminal.
In retrospect, the process was not very difficult for anyone with command-line experience. You just downloaded the script, made the file executable with chmod and ran it with a ./ command. But for a Linux neophyte like myself, it was a huge challenge. I suppose the work paid off, though, because it was a sort of baptism by fire in learning about the CLI.
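For the record, the whole ritual looked roughly like this (the installer filename is illustrative, and the script insisted that X not be running):

# make the downloaded installer executable, then run it as root from a text console
chmod +x NVIDIA-Linux-x86_64.run
sudo ./NVIDIA-Linux-x86_64.run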
A decade later, it has been years since I have had to install a display driver manually on Linux. Mostly that has been because all of the computers I have acquired since I began using Linux have Intel GPUs. But I gather that NVIDIA chips are now much better supported in Linux, thanks to open source driver projects like nouveau.
Office Software
I was impressed in 2007 by OpenOffice, the office software that came preinstalled with most Linux distributions. It more than met my needs at the time, which mostly involved writing mediocre history and English papers for my college courses. And I loved that it had a neat button for turning any type of document into a PDF file. My friends were jealous of that.
However, my relationship with OpenOffice and its descendant, LibreOffice (which I now use most of the time), has become rockier since 2007. Both platforms remain solid office suites. But my requirements for compatibility with Microsoft Office are much more strict today. I now write scholarly articles and books (not to mention mediocre blog posts). Every academic press with which I have ever worked expects files to be submitted in Word format, and I need to ensure that everything I see when working on a document is identical to what the editors will see when opening it in Word. For that reason, I keep Word on my Linux system and run it via Wine when I am working on a manuscript. I still use LibreOffice for writing less official documents, however.
PDF Readers
Working with PDF files is the one area that has perhaps become more challenging during my decade as a desktop Linux user.
Back in 2007, you could install a native version of Adobe Acrobat Reader on Linux. But Adobe stopped supporting Linux with Reader 9.
In general that's not a problem. Open source PDF readers can display most PDF documents just fine. But occasionally, I need to work with special types of PDFs that, for whatever silly reason, are only compatible with Acrobat. (Here's an example.) I have yet to find a good solution to this. Acrobat 9 installed on Ubuntu won't let me make comments on PDF documents. I can't get the Windows version of Acrobat 11 to work via Wine. As a workaround, I have to boot up a Windows virtual machine. I curse Adobe while I wait for the Windows virtual machine to boot, and long for the days when Acrobat Reader "just worked" on Linux.
Netflix
Last but not least, there's Netflix. The change here is pretty simple. Back in 2007, when Netflix streaming first debuted, it did not work with Linux at all.
Then, in 2012, it became possible to run Netflix on Linux using a special script that was not supported by Netflix. By 2014 Netflix officially supported Ubuntu. Today, the service "just works" for me in Chrome -- which is good because Netflix is the only thing standing between me and the outrageous cost of cable.

3 little things in Linux 4.10 that will make a big difference

Linux never sleeps. Linus Torvalds is already hard at work pulling together changes for the next version of the kernel (4.11). But with Linux 4.10 now out, three groups of changes are worth paying close attention to because they improve performance and enable feature sets that weren’t possible before on Linux.
Here’s a rundown of those changes to 4.10 and what they likely will mean for you, your cloud providers, and your Linux applications.

1. Virtualized GPUs

One class of hardware that’s always been difficult to emulate in virtual machines is GPUs. Typically, VMs provide their own custom video driver (slow), and graphics calls have to be translated (slow) back and forth between guest and host. The ideal solution would be to run the same graphics driver in a guest that you use on the host itself and have all the needed calls simply relayed back to the GPU.
There’s more here than being able to play, say, Battlefield 1 in a VM. Every resource provided by the GPU, including GPU-accelerated processing provided through libraries like CUDA, would be available to the VM as if it were running on regular, unvirtualized iron.
Intel introduced a graphics virtualization technology, GVT-g, to enable these things, but only in Linux 4.10 does OS-level support finally show up. In addition to kernel-level support for this feature via KVM (KVMGT), Intel has contributed support for the Xen and QEMU hypervisors.
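To give a sense of how this surfaces to an administrator, a vGPU instance is created through the mediated-device (mdev) entries in sysfs and then handed to QEMU via vfio. A sketch, with the PCI address, vGPU type name, and QEMU invocation all illustrative:

# list the vGPU types the i915 driver exposes for the integrated GPU
ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/
# create a vGPU instance under one of those types
UUID=$(uuidgen)
echo "$UUID" > /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V4_2/create
# attach the mediated device to a guest
qemu-system-x86_64 -enable-kvm -m 2048 -device vfio-pci,sysfsdev=/sys/bus/mdev/devices/$UUID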
Direct GVT-g support inside the kernel means third-party products can leverage it without anything more than the “vanilla” kernel. It's akin to how Docker turned a collection of native Linux features into a hugely successful devops solution; a big part of that success was that those features were available in most modern incarnations of Linux.

2. Better cache control technology

CPUs today are staggeringly fast. What's slow is when they have to pull data from main memory, so crucial data is cached close to the CPU. That strategy continues to this day, with ever-growing cache sizes. But at least some of the burden of cache management falls to the OS as well, and Linux 4.10 introduces some new techniques and tooling.
First is support for Intel Cache Allocation Technology (CAT), a feature available on Haswell-generation chips and later. With CAT, space in the L3 (and later L2) cache can be apportioned and reserved for specific tasks, so a given application's cached data isn't flushed out by other applications. CAT also apparently provides some protection against cache-based timing attacks—no small consideration given that every nook and cranny of modern computing is being scrutinized as a possible attack vector.
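Linux 4.10 exposes CAT through a new resource-control filesystem, resctrl. A minimal sketch, assuming a CAT-capable CPU; the group name and the cache bitmask are illustrative and must match your hardware's mask width:

# mount the resource-control filesystem (once)
mount -t resctrl resctrl /sys/fs/resctrl
# create a partition and reserve a slice of L3 cache for it
mkdir /sys/fs/resctrl/latency_sensitive
echo "L3:0=f" > /sys/fs/resctrl/latency_sensitive/schemata
# move the current shell (and its future children) into the partition
echo $$ > /sys/fs/resctrl/latency_sensitive/tasks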
Hand in hand with support for this feature is a new system tool, perf c2c. In systems with multiple sockets and nonuniform memory access (NUMA), threads running on different CPUs can make the cache less efficient if they try to modify the same memory segments (so-called false sharing). The perf c2c tool helps tease out such performance issues, although like CAT it's based on features provided specifically by Intel processors.
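Using it is a two-step record/report affair; the workload below is a placeholder:

# sample loads and stores while the workload runs
perf c2c record -- ./my_workload
# report cachelines contended across CPUs, hottest first
perf c2c report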

3. Writeback management

“Since the dawn of time, the way Linux synchronizes to disk the data written to memory by processes (aka. background writeback) has sucked,” writes KernelNewbies.org in its explanation of how Linux writeback management has been improved in 4.10. From now on, the I/O requests queue is monitored for the latency of its requests, and operations that cause longer latencies—heavy write operations in particular—can be throttled to allow other threads to have a chance.
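The throttling target is exposed per block device in sysfs, so you can inspect or tune it; the device name here is an assumption:

# target read latency in microseconds; writing 0 disables throttling,
# and writing -1 restores the kernel's default
cat /sys/block/sda/queue/wbt_lat_usec
echo 75000 > /sys/block/sda/queue/wbt_lat_usec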
In roughly the same vein, an experimental feature, off by default, provides a RAID5 writeback cache, so writes across multiple disks in a RAID5 array can be batched together. Another experimental feature, hybrid block polling (also off by default), provides a new way to poll devices that use a lot of throughput. Such polling helps improve performance, but if done too often it makes things worse; the new polling mechanisms are designed to ensure polling improves performance without driving up CPU usage.
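Hybrid polling is likewise toggled per device in sysfs, and it only matters on hardware fast enough to be worth polling, such as NVMe; the device name is an assumption:

# -1 = classic polling, 0 = hybrid (adaptive) polling,
# >0 = fixed sleep in microseconds before polling resumes
echo 0 > /sys/block/nvme0n1/queue/io_poll_delay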
This is all likely to have a major payoff with cloud computing instances that are optimized for heavy I/O. Amazon has several instance types in this class, and a kernel-level improvement that provides more balance between reads and writes, essentially for free, will be welcomed by them—and their industry rivals. This rising tide can’t help but lift all boats.

How to run commands at shutdown on Linux

Linux and Unix systems have long made it pretty easy to run a command on boot. Just add your command to /etc/rc.local and away you go. But as it turns out, running a command on shutdown is a little more complicated.
Why would you want to run a command as the computer shuts down? Perhaps you want to de-register a machine or service from a database. Maybe you want to copy data from a volatile storage system to a permanent location. Want your computer to post "#RIP me!" on its Twitter account before it shuts down?
I should clarify at this point that I'm talking about so-called "one-shot" commands as opposed to stopping a daemon. It's tempting to think of them the same way, though. If you're familiar with SysVinit you might think, "Oh, I'll just create a kill script." For example, /etc/rc.d/rc3.d/K99runmycommandatshutdown should get invoked when your system exits runlevel 3. After all, that's how the scripts in /etc/init.d/ get stopped.
That's a great guess, but it turns out that it's wrong. SysVinit does not blindly run the kill scripts. Instead, it looks for (on Red Hat 6) /var/lock/subsys/service_name (where service_name here would be runmycommandatshutdown). So you have to get a little bit tricky and treat your script like a regular service. The script below gives an example:
#!/bin/sh
# chkconfig: 2345 20 80
# description: An example init script to run a command at shutdown
 
# runmycommandatshutdown runs a command at shutdown. Very creative.
 
LOCKFILE=/var/lock/subsys/runmycommandatshutdown
start(){
    # Touch our lock file so that stopping will work correctly
    touch ${LOCKFILE}
}
 
stop(){
    # Remove our lock file
    rm ${LOCKFILE}
    # Run the command that we wanted to run at shutdown
    mycommand
}
 
case "$1" in
    start) start;;
    stop) stop;;
    *)
        echo $"Usage: $0 {start|stop}"
        exit 1
esac
exit 0
After putting that script in /etc/init.d/runmycommandatshutdown and enabling it with chkconfig, your command will be run at shutdown.
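On a Red Hat-style system, that amounts to the following; note the initial start, which creates the lock file so the kill script will actually fire:

# register the script and enable it for its default runlevels
chkconfig --add runmycommandatshutdown
chkconfig runmycommandatshutdown on
# start it once so the lock file exists for the current session
service runmycommandatshutdown start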

systemd

All of that is great, but what if you're running a distribution of Linux that uses systemd instead of SysVinit? Turns out, it's much simpler with systemd. All you have to do is put your script in /usr/lib/systemd/system-shutdown/, which is handled by systemd-halt.service. Of course, if you need to manage dependencies in a particular order (e.g., you can't post a tweet if the network stack is down), then you can write a systemd service unit file. For example:
[Unit]
Description=Run mycommand at shutdown
Requires=network.target
After=network.target
DefaultDependencies=no
Before=shutdown.target reboot.target
 
[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/true
# systemd requires an absolute path here; /usr/local/bin/mycommand is a placeholder
ExecStop=/usr/local/bin/mycommand
 
[Install]
WantedBy=multi-user.target
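Because RemainAfterExit=true keeps the unit active after its no-op ExecStart, systemd runs the ExecStop command when the unit is stopped at shutdown. To wire it up, reload systemd and enable the unit; the unit file name below is a placeholder for whatever you saved under /etc/systemd/system/:

# pick up the new unit file, then enable and start it in one step
sudo systemctl daemon-reload
sudo systemctl enable --now mycommand.service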