A glance at modern Information Security

Who hasn't wondered if they have chosen the right career path? During some recent self-analysis, one of the areas I keep considering for myself is the Information Security path, especially the position of CISO – after all, I have a fair share of (non-commercial) security engineering experience and some sense for organizing things. A recent article [1] describing the discovery of a new property of prime numbers got me thinking about what implications it could have and how I would respond. This is, after all, a rather forgotten plane among all the planes and vectors CISOs currently have to deal with.

So, what are the IS challenges currently popular and how is this different?

Let's put the organizational part aside, including policies and procedures or dealing with social engineering. In technology, in no particular order, there is software security, where the most prominent type of vulnerability is related to memory management. I'm sure everyone has heard or read about exploiting buffer overflow scenarios. Most of the time these result from a lack of control over data sizes and the memory allocated for them. So why not simply control it? Because the circumstances that create buffer overflow potential are not easy to notice when analyzing the source code. Something that is hard to spot and yet can lead to someone taking over control of a system puts this type of threat very high on the risk map. The good news? There are also many approaches to addressing vulnerabilities of this type, such as ASLR [2] or code audits.

Another endangered area is again related to memory, but from the hardware perspective. Rowhammer is much less known, probably because a successful exploitation of this vulnerability "on a large scale" has never made it to the news – or even above white paper [3] level. Nevertheless, this threat is probably just as "red" on the risk map as the one above. Successful exploitation can lead to privilege escalation, and as for likelihood – well, it affects virtually every PC and server memory module currently in use, including the new DDR4 [4]. There are hardware mitigations coming in new CPUs and memory controllers [5], but they are flawed in the way security solutions typically are – there is a negative impact on speed/latency. The speed of computer memory has always been a bottleneck, so in many implementations such solutions will not be acceptable.

Speaking of hammers, there is always the "hammer way" (as opposed to the clean-cut "scalpel way") of exploiting a vulnerability. I am referring to the next security area, commonly known as Denial of Service. The point of DoS is "if you cannot take over control, at least stop the service", typically by flooding it with spurious service requests. The financial implications can be quite severe. There are typical DoS flaws that can be addressed with smart programming, but there are also attacks based on distributing the source of the requests (DDoS), against which there is no efficient solution, aside perhaps from unreasonably increasing the infrastructure footprint and some smart collaboration with ISPs.

The last plane I wanted to cover here is cryptography. This is probably the easiest area to describe – mathematicians define ciphers which can be used to encrypt communication. Those are coded into functions which are then packaged into security libraries for programmers to use as required. Since proving that a cipher or its implementation has weaknesses can take a long time after the initial announcement, every now and then the IT world is rocked by a cryptography-related vulnerability such as Heartbleed, the weaknesses around MD5, or RC4 before that, and so on. Nevertheless, the widely used public-key ciphers are in their vast majority based on prime numbers – or more precisely, on the fact that factoring the product of two large primes cannot be done in reasonable time with currently available computing power.

In reference to [1], the typical weaknesses of ciphers have, aside from quantum computing, gained a new enemy. If the last digit of the next prime is not uniformly random – the article quotes biases on the order of 65% between certain pairs of endings – then "guessing" candidate numbers becomes easier. For example, one could assign priorities to candidates when running a brute-force check, thus speeding up the cracking process. This bias is then a clue, which can be used in a similar way to how you would crack a password hash with tools like John the Ripper – by building a set of rules such as "passwords with numbers are usually constructed with the letters preceding the numbers, e.g. the password 'Forthequeen96'".

I hope I am wrong about this; however, from the theory it appears that with this discovery [1], all modern cryptography got another impulse to find a new, quantum-computing-proof cipher. If I am right, this discovery simply makes all of today's cryptography weaker, meaning that if you needed a supercomputer to crack encrypted communication before, now you would need less.

1. https://www.quantamagazine.org/20160313-mathematicians-discover-prime-conspiracy/
2. https://en.wikipedia.org/wiki/Address_space_layout_randomization
3. http://users.ece.cmu.edu/~yoonguk/papers/kim-isca14.pdf
4. http://arstechnica.com/security/2016/03/once-thought-safe-ddr4-memory-shown-to-be-vulnerable-to-rowhammer/
5. http://blogs.cisco.com/security/mitigations-available-for-the-dram-row-hammer-vulnerability

Simple SPAM solution with auto-learn and recovery options

Perhaps you've come across those fancy, branded solutions that allow quarantining mail messages, delivery on demand (when you click a link in a suspected message), filtering incoming messages and so on. Upon some research, it appears that these come with a price tag of over $60,000 in typical configurations. Expensive, huh? This got me thinking about how hard it would be to deliver such functionality using open source solutions released under the GPL license. Well, I'm a bit rusty now, but every now and then I try to do something like this, for mental hygiene. It did not take me more than a couple of hours, so it is simple and minimalistic.

Side note: generally I think off-the-shelf solutions tend to lack flexibility, while their advantages more often than not come down to packaging, branding and some poor fellow on on-call support ready to answer basic questions. On the other hand, with the right team you can do wonders using GPL tools.

For the purpose of presenting the system in this article, I will assume you were able to set up your server(s) with an MTA, clamav and spamassassin, using whatever configuration you need to efficiently deliver email to your customers – internal or external. I use exim, spamassassin and clamav, but any other MTA will do if you prefer.

Out of the box, an MTA plus spamassassin allow spam filtering using a collection of rules written in Perl. On top of that, you can optionally enable a Bayesian filter that learns from the mail it sees (a minimal configuration sketch follows the list below). Such a setup, all together, offers the following functionality:

  • Unsolicited mail gets filtered out to some extent by the Perl rules which look for certain combinations of words and score penalty points.
  • Other messages can be fed to the learning algorithm to improve the efficiency of the filter.
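
For reference, enabling the Bayesian part usually comes down to a few lines in SpamAssassin's local.cf. The values below are only a sketch of the kind of settings involved (the path and thresholds may differ per distribution and traffic profile), not a drop-in configuration:

# /etc/mail/spamassassin/local.cf
required_score 5.0                        # score at which a message is tagged as spam
use_bayes 1                               # enable the Bayesian classifier
bayes_auto_learn 1                        # let obvious spam/ham train the filter on its own
bayes_auto_learn_threshold_spam 12.0      # only very high scores auto-learn as spam
bayes_auto_learn_threshold_nonspam 0.1    # only very low scores auto-learn as ham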

Something is missing, right? For example:

  • You need manual intervention in case the filters missed a spam message
  • Nothing handles false-positive hits

This guide will explain the most minimalistic method I could think of for delivering this missing functionality using open source solutions. Let's start with the easiest one – auto-learn without admin intervention.

Enabling auto-learn per maildir folder

This allows the users to keep training the filter until it reaches decent accuracy. With a well-trained filter, I was able to achieve 100% efficiency for many months, while processing an average of up to 1000 messages per day.

In this setup, I use the maildir format for storage, so each IMAP folder is a filesystem directory and each message is a file. This allows me to announce and create, for every user I support, a special folder called SPAM directly inside the INBOX. As part of the design, I can then ask users to move everything they consider spam to that folder, and have a script periodically (use your favorite implementation of cron; I use cronie) scan these folders for new messages, learn them as spam and delete them.

Additional ideas – you can make this smarter by:
1. Adding more controls to prevent mistaken drag-and-drop-and-forget.
2. Moving messages to a quarantine folder instead of deleting them.

A script performing such activity can look like this:

#!/bin/zsh
#
# Learn messages thrown to the SPAM folder as SPAM and delete them
# Script by Patryk Rzadzinski patryk@rzski.com
#

# Configurables
maildir="put path to your maildir here";
spamdir=".INBOX.SPAM";
log="/usr/bin/logger -t spamd";

# For whom should this run? Fill this the array with all your users
mailers=(Alice Bob Charlie)

for user in "${mailers[@]}"; do
	if [ -d "$maildir/$user/$spamdir" ]; then
		for spam in $(find "$maildir/$user/$spamdir" -type f ! -name "dovecot*" ! -name "maildirfolder"); do
			$(which sa-learn) --spam "${spam}" >/dev/null 2>&1 && eval "${log}" "Learned ${spam} message as spam.";
			rm -f "${spam}" && eval "${log}" "Deleted ${spam}.";
		done;
	fi;
done;

Since this is a new thing, I have enabled simple logging using the logger tool available on virtually any Linux system. The tag used here, "spamd", is arbitrary; I have previously configured my syslog-ng to catch all system messages with this tag and keep 30 days of files in a specific folder. This might come in handy for debugging, but is not the subject of this article.

Naturally, we need a crontab entry for this to work. How often should this script run? I think it depends on your load and the number of users. For a home setup you can run it every minute, with the benefit of acting quickly and the downside that if someone moves a file by mistake, there will be little time to react. I would recommend starting with hourly runs and then fine-tuning as required.
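
As a starting point, an hourly entry in the crontab of the user that owns the maildirs (added with crontab -e; the script path here is made up) could look like this:

# run the spam-learning script at the top of every hour
0 * * * * /usr/local/bin/learn-spam.sh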

Allowing the user to deal with false positives

One thing I really wanted at some point was to make sure an important email does not simply get filtered out. At the same time, I wanted the vast majority of the spam that reaches my mailbox to simply disappear. I designed the following compromise: a weekly digest of all the messages that were not delivered to me because the system considered them spam (in other words, they scored a sufficient number of penalty points), plus an option to recover each such message with 1-2 clicks. To achieve that, I wrote the following script, which cron runs for me once a week to generate a report about messages that were not delivered but thrown into the spam directory instead.

#!/bin/zsh
#
rcpt="your@email.here";
spamdir="path to your spam folder";

cd $spamdir;
spam_report="${spamdir}/workdir/temp";
rm -f ${spam_report};
n="<br>";
recovery="<a href=\"mailto:RecoverSpam@yourdomain.com?subject=";

# Strip the RFC 2047 markers and decode the base64 encoded header text
Mail_decode () {
        decoded="$(echo ${1} | cut -d' ' -f2)";
        if [[ "${decoded}" =~ "UTF-8" ]]; then
                stripped="$(echo "${decoded}" | sed -e 's:=?UTF-8?[BQ]?::g' -e 's:?=::')";
                decoded="$(echo "${stripped}" | base64 -d)";
        fi;
        printf '%s%s' "${decoded}" "${n}";
}

# Generate labels from spam message headers
for spam in *; do
        if [[ ! -d ${spam} ]] && (( ($(date +%s) - $(/usr/bin/stat -c %Y ${spam})) < 604800 )); then
                (printf "Message ID: ${spam}$n";
                printf "RECV: $(grep -i received: ${spam})$n";
                printf "$(grep -i from: ${spam} | tr -d '<>')$n";
                printf "$(grep -i to: ${spam} | tr -d '<>')$n";
                printf "Subject: $(Mail_decode "$(grep -i subject: ${spam})")";
                printf "${recovery}${spam}\">RECOVER$n";
                printf "=========================================$n";) >> ${spam_report};
        fi;
done;


if [[ ! -s "${spam_report}" ]] ; then
        echo "no new messages" | /usr/bin/mailx -s "spam-digest: no new messages" ${rcpt};
else
    	cat "${spam_report}" | /usr/bin/mailx -a 'Content-Type: text/html' -s "spam-digest" ${rcpt};
fi;

This script goes over the messages and, if they came in over the last 7 days, collects the information that would allow me to distinguish them from actual spam and sends over a summary. It also adds an HTML link which allows me to recover such messages. Last but not least, I change the MIME type of the report message to HTML so that the a href part is processed by the MUA – this allows the solution to work from roundcube and the mail clients on my phone and laptop. I think it will also work with pine or mutt.

In any case, the script results in the following report:

ID: q1aYxhW-74796
Envelope: (envelope-from foo@bar.com)
From: Contact
To: info infoaabbrzadzins.info
Subject: Re: Information
RECOVER
=========================================
ID: q1aYzMJ-74797
Envelope: (envelope-from x@y.net)
From: =?UTF-8?B?SmVycm9sZCBTb3Rv?=
To: =?UTF-8?B?cGF0cnlr?= Reply-To: =?UTF-8?B?cGF0cnlr?=
Subject: Free watches
RECOVER
=========================================
ID: q1aYzTR-74849
Envelope: (envelope-from z@asd.org)
From: someone trying to send spam
To: =?UTF-8?B?cGF0cnlr?= Reply-To: =?UTF-8?B?cGF0cnlr?=
Subject: Inheritance
RECOVER
=========================================
(...)

This way, at a glance, I know what hit the spam bin. Each "RECOVER" text is actually an HTML link pointing at an email address using the mailto: directive. On most systems this directive is configured to open a new message in the MUA the user (or the corporation) has chosen, and it takes additional parameters. In my example, I set the Subject to the file name of the spam message I want to recover. When I click (or tap) RECOVER under a message that I consider not to be spam, it opens a new "compose email" window with the pre-defined subject, addressed to a special mailbox configured on my system, which in turn triggers a script re-delivering the falsely spam-binned message to the original recipient.
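
To illustrate, a single RECOVER entry in the digest is nothing more than an anchor of this form (reusing the first message ID from the sample report above and the RecoverSpam address from the script):

<a href="mailto:RecoverSpam@yourdomain.com?subject=q1aYxhW-74796">RECOVER</a>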

How do we configure exim to understand and process such messages? This can be done at ACL level, using the "run" command, but I keep such functionality in the system filter. I use the following logic: when a message comes from an address that has authenticated (make sure you deny non-authenticated submissions) and is addressed to the pre-defined RecoverSpam recipient, execute a shell script with two arguments: the name of the file (we pass that in the subject, remember) and the original sender (so if I want something recovered, the system sends the message back to me – this comes from the envelope sender).

Here’s an example of exim system filter configuration:

# Exim filter
logfile /var/log/exim/filter.log

if "$h_to:" contains RecoverSpam@rzski.com then
	pipe "/path/to/spamrecovery.sh \"$h_subject:\" \"$sender_address\""
	finish
endif

# regular spam action depends on the custom header line injected by the ACL
if $message_headers: contains "X-ACL-Warn: message looks like spam" then
	save	/path/to/spamdir 0640
	finish
endif

Very simple and minimalistic, which was the goal set at the beginning of this article.

If you have a CISO looking at all this, then you might want to secure it a bit more. The easiest way would be adding more checks. One nice idea that comes to mind is adding soft tokens, which could be issued by any system daemon based on, say, normalized 8-character strings derived from /dev/urandom. These can be injected into message headers – you can define whatever format you want – and the script would then check their validity when processing a recovery request. Be careful in larger installations, where you might run low on entropy; in such cases you could deploy a dedicated token-issuing daemon.
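
A minimal sketch of that token idea, assuming 8 alphanumeric characters taken from /dev/urandom and a hypothetical X-Spam-Recovery-Token header (names and paths are made up, adapt to your setup):

#!/bin/zsh
# Hypothetical token helper: generate an 8-character token and remember it
# so the recovery script can verify it later.
tokenfile="/var/lib/spamrecovery/tokens";

token="$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 8)";
print "${token}" >> "${tokenfile}";

# The digest generator would append the token to the mailto subject,
# and the recovery script would refuse anything it has not issued:
# grep -qx "${token}" "${tokenfile}" || exit 1;
print "X-Spam-Recovery-Token: ${token}";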

The last element of this system is the script that processes the recoveries. Again, this is the simplest approach, which can be nicely expanded with additional security and sanity checks as needed.

#!/bin/zsh
#
# Recover files from spam, by Patryk Rzadzinski patryk@rzski.com

# Configurables
spamdir="/path/to/spam";
mailer="/usr/sbin/exim";
log="/usr/bin/logger -t spamd";

# Change this to match your spam tag or token
clean="/bin/sed -i '/X-ACL-Warn: message looks like spam/d'";

userid="${2%\@*}";

# Conditioning - tighten as required.
# Replace the last check with whatever authentication method you use - this one is for passwd
if [ ! -z "$1" ] && [ "$#" -eq 2 ] && [[ "${userid}" =~ ^[a-zA-Z]{4,}$ ]] && id -u "${userid}" >/dev/null 2>&1; then
	# remove the spam identifier from the message
	eval "${clean}" "${spamdir}/${1}" && eval "${log}" "Removed spam tag from message $1.";

	# teach spamd that this message is not spam
	$(which sa-learn) --ham "${spamdir}/$1" >/dev/null 2>&1 && eval "${log}" "Spamd learned message $1 as ham.";

	# re-deliver
	cat "${spamdir}/${1}" | eval "${mailer}" "${2}" && eval "${log}" "Re-delivered message $1 to recipient";

	# delete message from spamdir
	rm -f "${spamdir}/${1}" && eval "${log}" "Removed message file $1 from ${spamdir}.";

	# return 0 to confirm script completed successfully
	exit 0;
else
	printf "Arguments passed to spam recovery are not OK - 1: $1 2: $2 3: userid: ${userid}.\n";
	exit 1;
fi;

As before, there is space for improvement. For example, instead of a single RECOVER "button", you could make the action depend on arguments and offer the user a choice: delete, deliver, learn as HAM and deliver, and so on. In my case, RECOVER means, in this order: remove the spam tag AND learn as HAM AND re-deliver to the original recipient AND remove from the spamdir. This is simple but can be improved for flexibility. Even better, expand the mail-digest script with some branding and graphics to make your users happy and give them that "expensive solution" feel.
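
A sketch of that idea: the recovery script could take a hypothetical third argument carrying the requested action and branch on it, keeping the current behaviour as the default:

# Hypothetical extension: $3 carries the requested action
case "${3:-recover}" in
	delete)
		rm -f "${spamdir}/${1}";;
	ham)
		$(which sa-learn) --ham "${spamdir}/${1}" >/dev/null 2>&1;;
	recover|*)
		# current behaviour: clean tag, learn as ham, re-deliver, remove
		;;
esac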

And that's all! Note that you could move this whole logic to PHP and process it over the web, but that makes it much less minimalistic and you end up having to secure PHP, which is always fun and full of pitfalls.

Another thing worth doing is moving the recovery process to a Docker container running only exim and sharing the spamdir folder, which would be a nice piece of sandboxing. I did not do this only because my VPS is low on disk space, but such a setup should make CISOs happy and offer some extra peace of mind.

Comment on vision & strategy in light of the Apple vs FBI case

Originally I published this on LinkedIn, here’s the comment.

The [lost] ability to define long term [IT] strategy

One thing I absolutely love about the US presidential elections is that the candidates are challenged to provide opinions on theoretical subjects that actually matter. This gives a decent insight into their ability to define long-term strategies, often on subjects which are very abstract and require a certain kind of mental discipline and the ability to imagine multiple levels of implications. The most recent example is the demand that Apple introduce backdoors into their software.

Long story short, the candidates remaining in the race, with the exception of the Libertarian Party's, agree that the government needs to have some sort of a "master key" in order to tap into communications where required [1]. As commenters [2] have already noticed, such a requirement leads to a number of issues. First of all, the next vendor might not be US-based, and thus the whole effort would be futile. Second, the current suspects used software as provided by the vendor, but what stops them from creating a cryptographic application that encrypts the communication for them? Nevertheless, the presidential candidates make it quite clear in their speeches: security above liberty, because "something must be done".

So that leads me to the long-term part. By the rule of induction, should cryptography be banned altogether? No ciphers, all communication in the open? I think it is safe to say such an approach will just not work. In the vast majority of cases cryptography is there to secure our information, payment card data included.

So the residual facts that remain seem to be:

  • Backdoors or master keys are not the answer – they do not solve the problem, though they will probably win some votes. And they open up a whole new bag of problems.
  • Cryptography is here to stay.

But the long-term question stands: what should we be doing to avoid these situations? And how does all this tie into IT? In my line of work, I often lead technical incident response teams challenged to find a solution to an actual problem. One thing I have learned over time is that sometimes having the best minds on the team is simply not enough to solve a case. Sometimes you need to take a few steps back and realize the root cause is outside of the picture everyone is focusing on. Sometimes a long-term strategy of doing (or not doing) certain things in a certain (standardization!) way offers the solution – the catch is, it might seem completely unrelated!

Does that mean the FBI should just allow San Bernardino to happen? Of course not; it is simply that the root cause lies completely outside the scope of this discussion, and cryptography has nothing to do with it. The problem will not be solved in this area – but who will be the leader able to notice that, and can democratic elections still give us such leaders?

  1. http://windowsitpro.com/security/where-do-presidential-candidates-stand-encryption
  2. http://politics.slashdot.org/story/16/02/19/0019218/where-do-the-presidential-candidates-stand-on-encryption

Jack and ALSA: sound through multiple devices

I have designed a simple audio-video setup for my entertainment room at home, where I have:

  • High-quality headphones connected to a decent pre/post amplifier which connects to the computer through USB (and registers as a USB audio device with both my Windows 10 and Gentoo Linux).
  • A not-so-high-end (but not bad at all!) on-board sound chip provided with my motherboard – this one I'd like to connect to my TV.

The goal is to have sound go via the on-board card to the TV for less demanding tasks, such as watching movies, and via USB to the amp when I listen to music.

After some research it appears ALSA on its own cannot do this job, but JACK can. At first I did not like the overhead, but I quickly realised it is minimal. The howto below is based on JACK. On the 4.3 kernel (no -rt patches, since most of the good work on real-time processing has long been in the mainline kernel) it works well.

First of all, it is good to ensure all packages are built with jack support (USE flags in Gentoo make this a trivial task) and to install the JACK daemon (jackd). For some reason it is a daemon that does not come with a startup script, so you'll need to figure out how you want to start it – for me it ended up in the xfce session controller (so it starts with my user ID when I log in) – but you can write a script too, just ensure it depends on alsasound.
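
For reference, a bare-bones wrapper (a sketch only; the log location and script name are made up) that waits for ALSA to register the cards before starting the daemon could look like this:

Code:
#!/bin/zsh
# start-jackd.sh - wait for ALSA, then start jackd
# (the -P option is explained further below)
until [ -e /proc/asound/cards ]; do
	sleep 1;
done
exec /usr/bin/jackd -d alsa -P both >> /tmp/jackd.log 2>&1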

There is a gotcha here – to run jackd without root privileges, you need to edit limits.conf to give the audio group realtime permissions:

Code:
root@ryba ~ # grep audio /etc/security/limits.conf
@audio      -       rtprio          99
@audio      -       memlock         unlimited
@audio      -       nice            -10


And of course, have your user in the audio group. As always with group changes – for them to take effect, all of the user's sessions must be closed and re-opened.
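
Adding the user is a one-liner (run as root, replace the user name):

Code:
gpasswd -a youruser audio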

Now, the trick is that you still need the alsasound daemon running and a proper /etc/asound.conf which mixes multiple devices. The following config works for the first two sound cards, because it uses card indexes instead of device names (you get those from aplay -l). The exception is the last config section – it is apparently a dummy control, but alsa needs it:

Code:
root@ryba ~ # cat /etc/asound.conf
pcm.both {
	type route;
	slave.pcm {
		type multi;
		slaves.a.pcm "plughw:0,0";
		slaves.b.pcm "plughw:1,0";
		slaves.a.channels 2;
		slaves.b.channels 2;
		bindings.0.slave a;
		bindings.0.channel 0;
		bindings.1.slave a;
		bindings.1.channel 1;
		bindings.2.slave b;
		bindings.2.channel 0;
		bindings.3.slave b;
		bindings.3.channel 1;
	}
	ttable.0.0 1;
	ttable.1.1 1;
	ttable.0.2 1;
	ttable.1.3 1;
}

pcm.jack {
	type jack
	playback_ports {
		0 system:playback_1
		1 system:playback_2
	}
	capture_ports {
		0 system:capture_1
		1 system:capture_2
	}
}

pcm.!default {
	type plug;
	slave.pcm "jack";
}

ctl.!default {
	type hw;
	card 0;
}

Next, I need to tell jack to start WITH this config too, so my line that starts it under xfce session starter is as follows:

Code:
/usr/bin/jackd -d alsa -P both

Explanation: run jackd with the alsa backend (-d alsa) and the playback device set to "both" (-P both) – because that is how the combined device is named in /etc/asound.conf: pcm.both.

The final step is to tell your software, such as mplayer, moc (music on console), or whatever else you use, to switch from alsa to jack – that's fairly easy, just rtfm. You can also leave it set to alsa, since alsa is routed into jack as well, making jackd the catch-all solution. This is quite important, because if you use Firefox and would like sound from flash and HTML5, FF will first try pulseaudio and, if it is not present (as in my case, because I don't want that overhead), fall back to alsa – but never jack, and there is no config handle for that. So the above config basically pushes everything that tries to use alsa through jackd.
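
For example, mplayer takes the output driver on the command line, and moc selects it in its config file (the moc line below is from memory, so double-check against its documentation):

Code:
mplayer -ao jack movie.mkv
# and for moc, in ~/.moc/config:
SoundDriver = JACK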

Needed: alsa-plugins, jack-audio-connection-kit
Not needed: alsa_out, qjackctl (the gui thing), gst-plugins-alsa, pulseaudio

Shell shock – impact analysis

The article below is my publication for the Polish IT magazine IT Wiz.

Calm after the storm?

As software keeps evolving, so does its security. Critical vulnerabilities no longer appear as often as they did, say, a decade ago. Following that line of thought, system administrators could have expected a quiet end to this year – after all, it was not long ago that the world was shaken by a bug that earned its own codename: Heartbleed. Two factors made that bug critical: first, it affected a widely used configuration in the encryption layer, i.e. something that serves no other purpose – it is pure security overhead. Second, the ubiquity of the implementation, since it affected one of the more popular versions of the widely used OpenSSL library.

Unfortunately (or quite the opposite), there will be no quiet end to the year. Stephane Chazelas, a French IT engineer, has discovered a vulnerability which may not meet the first criterion, but whose exposure can be estimated as much larger, not to mention the ease of exploitation. This time the flaw affects the Bash system shell, in all of its actively maintained branches. The problem is that practically every Linux distribution has been using Bash as the default system shell for many years, including embedded devices and places where we would not even expect to find Linux. While Heartbleed allowed data to be stolen through a targeted attack, here we can expect scripts that exploit the vulnerability automatically and on a massive scale, building botnets of considerable size. It is worth adding that botnets built from servers are far more powerful, thanks to their usually better network connectivity. Non-IT media, such as Reuters, have already raised the alarm on this subject.

How it works

Bash, like other shells, allows environment variables to be defined. Such variables can contain function definitions if they start with the character sequence '() {'. This is particularly handy for passing functions to other shell instances on a given system, but in this case it comes with a threat. The problem is that code interpretation does not stop at the closing brace of the definition. Therefore, to successfully exploit the vulnerability it is enough to override a widely recognized variable holding a function and append code, which will then execute with whatever privileges the shell instance runs with. The bug is critical enough to have earned its own codename: Shellshock.
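
The widely circulated one-liner test illustrates the mechanism: the function definition ends at the closing brace, yet the trailing command is executed the moment the new Bash instance imports the variable. On a vulnerable system the word "vulnerable" is printed before the actual test string:

env x='() { :;}; echo vulnerable' bash -c "echo this is a test"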

But I am not exposing a shell!

The scale of the problem lies in the multitude of use cases. With Heartbleed there was plenty to do: every implementation of the OpenSSL library had to be tracked down by identifying all devices capable of establishing encrypted connections. Thus, on top of the typical attacks on servers came all kinds of VPN clients, which means even home routers and many other devices.

In this case there will be far more work in tracking down potential attack vectors, because every use of Bash that involves an environment containing function-carrying variables is vulnerable. So this is not just about multi-user systems. Systems where the user cannot even create new environment variables are vulnerable too! Attacks can therefore be carried out indirectly – through CGI scripts, for example, which means practically every HTTP server generating dynamic pages, for instance in PHP. On top of that come the environments of DHCP client scripts, the AcceptEnv option in OpenSSH, SSH_ORIGINAL_COMMAND, and all kinds of scripts that export function definitions, especially those with SUID set.

Linux & ALSA on integrated sound card

I am moving various publications and how-to documents to this blog and found this one. In fact, many of these steps are still valid when troubleshooting issues; however, the state of ALSA in kernel 4.4 is much better and, in the vast majority of cases, sound should work on new laptops right away.

To set up ALSA on laptops, notebooks and many modern motherboards which come with an integrated sound card, you need to take the following steps.

1. Collect information about your hardware device using lspci -v (you can grep for audio).

00:1b.0 Audio device: Intel Corporation 82801G (ICH7 Family) High Definition Audio Controller (rev 02)
Subsystem: Micro-Star International Co., Ltd. Unknown device 040d

What's important here is the Subsystem. Write it down.

2. Obtain the chipset info from your running configuration.

$ cat /proc/asound/pcm
00-06: Si3054 Modem : Si3054 Modem : playback 1 : capture 1
00-02: ALC883 Analog : ALC883 Analog : capture 2
00-01: ALC883 Digital : ALC883 Digital : playback 1
00-00: ALC883 Analog : ALC883 Analog : playback 1 : capture 2

So, in my case it is an ALC883 chipset and an MSI subsystem. Of course, the snd-hda-intel driver must be loaded first. But to get proper sound output, you have to add an appropriate model line to /etc/modules.d/alsa, where the module gets configured at load time (for example at boot).

3. Check for the right model to use with the module loading configuration.

Time to read the documentation that comes with the Linux kernel. To do this, open /usr/src/linux/Documentation/sound/alsa/ALSA-Configuration.txt with your favourite text editor. Find the occurrence of your chipset and browse the list of options and subsystems. In my case:

ALC883/888
3stack-dig 3-jack with SPDIF I/O
6stack-dig 6-jack digital with SPDIF I/O
3stack-6ch 3-jack 6-channel
3stack-6ch-dig 3-jack 6-channel with SPDIF I/O
6stack-dig-demo 6-jack digital for Intel demo board
acer Acer laptops (Travelmate 3012WTMi, Aspire 5600, etc)
medion Medion Laptops
targa-dig Targa/MSI
targa-2ch-dig Targa/MSI with 2-channel
laptop-eapd 3-jack with SPDIF I/O and EAPD (Clevo M540JE, M550JE)
auto auto-config reading BIOS (default)

I find my subsystem (MSI) on the list. It is next to the targa-dig option, so I have to use it in my configuration. This means that for me the appropriate /etc/modules.d/alsa configuration line is:

options snd_hda_intel model=targa-dig

Put your line in the config file and then reload the module (if your kernel was built without module unloading support, you will need to reboot).
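
Reloading usually boils down to removing and re-inserting the module (make sure nothing is using the sound device at that point):

modprobe -r snd_hda_intel
modprobe snd_hda_intel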

Mencoder: create and encode movies from files (frag movies)

Introduction

This howto provides some usual information on creating frag movies (or any other videos) with a powerful command line tool called mencoder. Mencoder is a part of mplayer and is used to deal with multimedia streams.

MPlayer

MPlayer is a movie player for Linux (runs on many other platforms and CPU architectures, see the documentation). It plays most MPEG/VOB, AVI, ASF/WMA/WMV, RM, QT/MOV/MP4, Ogg/OGM, MKV, VIVO, FLI, NuppelVideo, yuv4mpeg, FILM and RoQ files, supported by many native and binary codecs. You can watch Video CD, SVCD, DVD, 3ivx, DivX 3/4/5 and even WMV movies, too.

mencoder

mencoder (MPlayer’s Movie Encoder) is a simple movie encoder, designed to encode MPlayer-playable movies (see above) to other MPlayer-playable formats. It encodes to MPEG-4 (DivX/Xvid), one of the libavcodec codecs and PCM/MP3/VBRMP3 audio in 1, 2 or 3 passes. Furthermore it has stream copying abilities, a powerful filter system (crop, expand, flip, postprocess, rotate, scale, noise, RGB/YUV conversion) and more.

Building a video from screenshots

If you are using quake on a UNIX-based operating system with ezQuake 1.9 or older, then most probably capturing screenshots is all you can do (unlike Windows, where you can capture and encode video on the fly). With mencoder it is quite easy to compile multiple screenshots into an encoded or raw video file. To compile all .jpg files in the current directory at 30 frames per second:

mencoder "mf://*.jpg" -mf fps=30 -o output.avi

To keep the motion unchanged, the fps setting must match the one used during capturing; changing this value will result in slower or faster playback. The speed of the output video file can also be adjusted with the -speed parameter. If your capture produced tga files instead of jpg, just replace mf://*.jpg with mf://*.tga.

Building a video from other video files

You can do the same with multiple video files, compiling them into one and possibly encoding it. Note though that if the input files vary in resolution, the output video's resolution will be the largest among them; the smaller ones will be padded with black borders.

To compile all .avi files in the current directory:

mencoder *.avi -o output.avi

Note that you might have to add -nosound to the command line due to various problems.

Encoding

There are many codecs available to use with mencoder. Each one has its own specific encoding options. A short description of the most popular codecs follows. To see what video and audio codecs are available to you, issue the following command:

mencoder -ovc help -oac help

You will be shown a list of possible codecs for audio and video. Selecting a codec is done by -ovc <vidcodec> and -oac <audiocodec> in the command line, where ovc stands for output video codec and oac is output audio codec. The most important encoding settings are bitrate, pass, speed and aspect.

xvid codec

To encode a video file with the xvid codec (recommended for Frag Of The Week), the -ovc parameter has to be set to xvid and -xvidencopts should follow to enable custom encoding options. Possible encoding options are:

pass=<1|2> - specify the pass in two pass mode
bitrate=<x> - specify the bitrate of the encoded file (the higher, the better the quality and the bigger the file)

Example:

mencoder input.avi -o output.avi -ovc xvid -xvidencopts bitrate=3000

For best results use the two-pass mode. Some tasks, such as encoding a live feed in real time, TV capture or a security camera, allow for single-pass mode only; in any other case two-pass mode is recommended. It is often necessary to resize the final movie clip to the desired resolution and to use gamma settings other than the default.
Example of encoding in 2-pass mode, resizing to 320×240 and raising the gamma level to 1.5:

mencoder "mf://*.tga" -mf fps=25 -o /dev/null -ovc xvid -xvidencopts pass=1:bitrate=3000 -vf scale=320:240,eq2=1.5
mencoder "mf://*.tga" -mf fps=25 -o output.avi -ovc xvid -xvidencopts pass=2:bitrate=3000 -vf scale=320:240,eq2=1.5

The bitrate setting in the first pass is not really needed and mencoder can ignore it.
Mencoder uses the data gathered during the first pass via the divx2pass.log file (so stay in the same directory).

x264 codec

This codec can give very good quality while the file size remains low, and it is rapidly gaining popularity among groups releasing movies. Its use is very similar to xvid: -ovc has to be set to x264 and the -x264encopts parameter allows further customization. The nr encoding option stands for noise reduction, which might be useful for bad-quality source material. Example:

mencoder input.avi -o output.avi -ovc x264 -x264encopts bitrate=3000:pass=1:nr=2000

MPEG codec

The MPEG muxer can generate 5 types of streams, each of which has reasonable default parameters that the user can override. Generally, when generating MPEG files, it is advisable to disable mencoder’s frame-skip code (see -noskip and -mc). Example:

mencoder input.avi -o output.avi -ovc lavc -mpegopts format=mpeg2:tsaf:vbitrate=8000 -nosound

lavc filter

This filter is supposed to give the best quality and relatively small files, according to MPlayer's manual. The following example shows one way of encoding a file with lavc. Example:

mencoder "mf://*.jpg" -mf fps=30 -o output.avi -ovc lavc -lavcopts vcodec=mpeg4

Another example:

mencoder input.avi -o output.avi -oac copy -ovc lavc -lavcopts vcodec=mpeg4:mbd=1:vbitrate=2800

Using audio

At the current stage of ezQuake's development, there is no way to record quake sounds on Linux during capturing. It is still possible to use an audio file as the soundtrack, and the audio track can also be encoded (-oac). The -nosound parameter can be used to strip audio from a video file or to skip audio when compiling one. An audio stream can be included thanks to the -audiofile parameter. In this next example, mencoder adds an audio track from a .wav file and encodes it to mp3.

mencoder input.avi -o output.avi -ovc copy -oac mp3lame -audiofile soundtrack.wav

It is also possible to add an audio stream which is already encoded:

mencoder input.avi -o output.avi -ovc copy -oac copy -audiofile soundtrack.mp3

It is a good idea to prepare the audio track beforehand. There is a great GUI tool for this under Linux, called Audacity. The program is very easy to use thanks to its intuitive interface; it allows mixing, cutting and rearranging audio files and can export the results to mp3.

POSIX threads, C implementation in Linux

In an effort to move my work to a single place, I'm going to link my translations from English to Polish done for the Gentoo distribution. The author of the original text is Daniel Robbins. I find these documents well written and helpful when learning thread management – not only if you want to use threads in your own software, but also to understand how the OS manages them (IT operations, troubleshooting issues, improving system performance, debugging).

Original work (now hosted on IBM website).

My translations: POSIX Threads part 1, part 2 and part 3.