Windows 10 Fall Creators Update: blank screen in HP Z1 Workstation

Well, I just got automatically upgraded to the Windows 10 Fall Creators Update on my HP Z1 Workstation and, as you can guess, I hit exactly the same problem: the screen turned black and the machine went into some kind of zombie state.

Since this had already happened to me during the Windows 10 Creators Update, I knew what to do 🙂 My recommended steps to solve the problem are the following:

0. As before any major upgrade, back up all your data and make sure you will not suffer if something goes wrong and the upgrade fails

1. Get the IP address of your Z1 machine by launching cmd.exe and running ipconfig in the console

2. Configure Remote Desktop access to your Z1 machine from another device. Make sure you can connect to the Z1 from another machine before you start the upgrade to the Fall Creators Update

3. Make sure there is no external drive connected to the Z1. I had a problem when upgrading to the Fall Creators Update while an SD card was connected to the device: everything just hung and the machine did not reboot (the LED that indicates disk activity was off and the machine stayed in a zombie state forever; once I removed the SD card, the machine automatically rebooted and the upgrade continued)

4. Cross your fingers and start the upgrade

5. As expected, the screen will switch off; be patient and fearless 🙂

6. Periodically go to the device and monitor the LED that indicates hard disk activity (top right corner). If it is blinking, the upgrade is still in progress

7. From another machine, open cmd.exe and try to ping the Z1's IP address using the command

ping [here comes the ip address you obtained in step#1]

8. Check the ping results. If you get a response, continue:

9. Connect to the machine via RDP

10. Go to Device Manager and locate your video adapter

11. Click “Uninstall device” and also tick the option to delete the driver software

12. Reboot the machine by executing the command:

shutdown -r -t 1

13. After the reboot the display will work. The resolution will be very poor at first, but after a few minutes Windows 10 will reconfigure the video driver and it will work correctly
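The waiting in steps 6-8 can be streamlined from the second (Windows) machine with a continuous ping. A sketch for cmd.exe; the address below is a placeholder for the one you obtained in step 1:

```
:: keep pinging until replies appear, then press Ctrl+C
ping -t 192.168.1.50

:: once the machine answers, connect via Remote Desktop
mstsc /v:192.168.1.50
```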

P.S. If you hit a problem at step #8 (I actually did), monitor the disk LED to find out when the upgrade is finished and the machine is idle. If you determine that the machine is idle, or you really can't determine anything, wait 2 hours and reboot the machine. After the reboot the ping will indicate that the machine is online, so it's time to connect via RDP and fix the problem (step #9).

Windows 10 Creators Update: blank screen in HP Z1 Workstation

Today I was one of those Windows 10 users who received the Windows 10 Creators Update. In my case it was a rather bumpy road, because at the end of the installation I hit a common problem: a blank screen on my computer, an HP Z1 Workstation (first generation).

I discovered a couple of issues during the upgrade process:

1. The overall installation took a lot of time; for me it was stuck at 24% for approximately 1 hour. This is quite strange considering that Windows 10 is installed on an SSD drive

2. As the progress slowly went up (to 30%), the machine did a couple of reboots and I ended up with a blank screen

I tried a hard reset a couple of times but it did not get better. After checking whether I could still ping the machine in that state, I was surprised to discover that I could ping it and even connect to it via Remote Desktop.

At this point it was clear that the blank screen problem was caused by the driver of my video card, an Nvidia Quadro 1000M. A quick check of the version revealed that I had the latest one.

After some thinking I decided to use an old-school trick: going to Device Manager via RDP, removing the video adapter, and rebooting. And voilà! After the reboot the screen was back to normal!

It is a pity that issues like this may mislead you into blaming the Creators Update installer while the problem really resides in the video card driver. From Microsoft's perspective, I think it would be nice to add a video driver reset right after installation. The user would see some flickering and poor screen quality, but at least they would not be misled into going to safe mode and reinstalling Windows from scratch.

Bluetooth headphones headache in Linux: a trick with a retry script

If you have ever used Bluetooth headphones in Linux, you most likely noticed an annoying issue: the headphones are detected successfully, but the sound still does not work correctly.

This problem has been discussed numerous times, and I recently found a very useful post that helps address it.

It seems like the solution is pretty easy – simply download the script from github and execute it using command line like so:


I noticed, however, that this does not always help, and a2dp needs to be executed one more time for the headphones to work. To address this, one may simply create a tiny script that re-runs it until the problem is resolved:


keyword='"Enjoy" the HiFi stereo music :)'

while :; do
  # run the downloaded script until it reports success
  result=$(./ | grep "$keyword")
  if [ "$keyword" = "$result" ]; then
    echo "correct - $result"
    break
  fi
  echo "wrong - $result"
done
Update: I’ve created a small project on github to host this script:

Windows 8.1 and Windows Server 2012 R2 support in Microsoft Hardware Certification Kit

At work I use the Microsoft HCK as a framework for testing my projects, which involves testing a few drivers and NT services. This works great for us, since we can run our custom test jobs as well as the ones from the WHQL tests. We have invested a lot of time into creating different test jobs and tools, and our testing ecosystem is based on that. Using HCK we can test dozens of machines covering the major operating systems: XP, Vista, 7, 8, Server 2003, Server 2008 and Server 2012.

Recently Microsoft released the Windows 8.1 and Server 2012 R2 operating systems, which do not seem to be compatible with the version of HCK we are using (HCK for Windows 8). If you try to install the HCK client on 8.1 or Server 2012 R2, you will see the following error message: “Windows Hardware Certification Kit Client Setup wizard ended prematurely”. The message looks like this:

It seems like Microsoft's response to this problem is simple: upgrade your HCK from 8.0 to 8.1 and the problem will go away. However, there is a catch.

The newer HCK 8.1 does not support Windows XP, Windows Vista or Server 2003, which means that if you are writing custom test jobs, you will probably need two different HCK servers to cover all operating systems:

HCK 8.0 to cover XP, Vista and Server 2003
HCK 8.1 to cover 7, 8, 8.1, Server 2008, Server 2012, Server 2012 R2

The problem with this approach is synchronization of jobs. Since HCK is poorly documented, you will most likely not find any official documentation on how to synchronize jobs between two different versions of HCK. Moreover, as the HCK protocol is proprietary, it might not be possible at all in future versions.

The most interesting thing is that, in terms of features, the difference between Windows 8.0 and Windows 8.1 is quite small. Conceptually, these are the same systems. How come HCK 8.0 supports Windows 8.0 but not Windows 8.1, given the technical similarities? Perhaps, with this idea in mind, it is possible to make Windows 8.1 work with HCK 8.0 and get full coverage of all operating systems on a single HCK server?

In order to see what exactly the problem is, one has to dig into the MSI log of the installation. An error message like this usually means a failure of some custom action inside the MSI, and analysis of that custom action might shed more light on the problem.

The usual place for the HCK installation log is the user's temp folder. The file name is Windows Certification Kit Client_Install.log. Just search for this file and see if you get any hits. Once you have found it, open it in any text editor and try to find the exact cause of the failure:

MSI (c) (C4:9C) [20:22:08:376]: Doing action: SetICFEnabled
Action start 20:22:08: SetICFEnabled.
MSI (c) (C4:AC) [20:22:08:408]: Invoking remote custom action. DLL: C:\Users\VSHCHE~1\AppData\Local\Temp\MSI4CD9.tmp, Entrypoint: SetICFProperties
MSI (c) (C4:B0) [20:22:08:454]: Cloaking enabled.
MSI (c) (C4:B0) [20:22:08:454]: Attempting to enable all disabled privileges before calling Install on Server
MSI (c) (C4:B0) [20:22:08:454]: Connected to service for CA interface.
CustomAction SetICFEnabled returned actual error code 1157 (note this may not be 100% accurate if translation happened inside sandbox)
MSI (c) (C4:9C) [20:22:08:532]: Note: 1: 1723 2: SetICFEnabled 3: SetICFProperties 4: C:\Users\VSHCHE~1\AppData\Local\Temp\MSI4CD9.tmp 
MSI (c) (C4:9C) [20:22:08:532]: Product: Windows Hardware Certification Kit Client -- Error 1723. There is a problem with this Windows Installer package. A DLL required for this install to complete could not be run. Contact your support personnel or package vendor.  Action SetICFEnabled, entry: SetICFProperties, library: C:\Users\VSHCHE~1\AppData\Local\Temp\MSI4CD9.tmp 

As you can see above, the custom action SetICFEnabled (entry point SetICFProperties) failed with some error. It is unclear what this custom action does, but as the name suggests, ICF may stand for Internet Connection Firewall. From this point, there are several ways to understand the purpose of this custom action:

1. (easy) One may use the Orca MSI editor and remove the SetICFEnabled custom action
2. (complex) One may use the IDA disassembler and analyze the SetICFProperties function exported by one of the custom action DLLs

I decided to go with choice #1; however, it was clear to me that if disabling the custom action did not solve anything, I would have to fall back to #2 anyway. So, here are the exact steps on what to do next:

1. Open Windows Hardware Certification Kit Client-x86_en-us.msi or Windows Hardware Certification Kit Client-x64_en-us.msi with Orca

2. Navigate to the CustomAction table as shown in the image

3. Remove the table entry and memorize the name of the action: “SetICFEnabled”

4. Now navigate to the InstallExecuteSequence table as shown in the image and remove the “SetICFEnabled” entry

5. Now navigate to the InstallUISequence table as shown in the image and remove the “SetICFEnabled” entry

6. Save your modifications in Orca and try to run the MSI on Windows 8.1

7. Observe that the MSI installs successfully

It turns out that the SetICFEnabled custom action simply adds the HCK client communication port (TCP 1771) to the exclusion list of the Microsoft Internet Connection Firewall. Since something changed in Windows 8.1 regarding ICF, the old way of adding an exclusion for a TCP port no longer works, and the whole MSI fails with a generic error.

What this means for you: despite patching the MSI with Orca, you will probably need to add an exclusion for TCP port 1771 (outbound and inbound) manually on your Windows 8.1 machines.
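A sketch of adding those exclusions from an elevated command prompt using netsh (the rule names below are my own; TCP 1771 is the HCK client port identified above):

```
netsh advfirewall firewall add rule name="HCK Client 1771 in" dir=in action=allow protocol=TCP localport=1771
netsh advfirewall firewall add rule name="HCK Client 1771 out" dir=out action=allow protocol=TCP remoteport=1771
```

The outbound rule uses remoteport on the assumption that the client connects out to port 1771; switch it to localport if your setup listens on that port instead.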

Windows Update nukes wi-fi connectivity on HP ElitePad 900

Yesterday I got an update from Windows Update for my ElitePad's wi-fi card (Qualcomm Atheros AR6000), dated 3/4/2013 and carrying a new driver version. I have to say, that is the brightest update I have ever received from Microsoft. Well, technically from Qualcomm (or HP), but Microsoft is responsible for the quality process involving HCK (Hardware Certification Kit) testing of drivers before they are shipped to users via Windows Update.

Right after installation my wi-fi connectivity was just gone, and there was no way to recover it other than uninstalling the device from Device Manager and rolling back to a previous version of the driver. After that, it works like a charm.

Anyone interested in solving the problem?

P.S. I still find the most stable version of the ElitePad's wi-fi driver to be the one dated 08 February 2013, as explained here.

Cannot locate any registered Enterprise message when running wttcl.exe from HCK

I've been doing some automation recently and came across a useful tool from Microsoft called wttcl.exe, which allows you to schedule jobs in HCK remotely. The tool can typically be found in the C:\Program Files (x86)\Windows Kits\8.0\Hardware Certification Kit\Studio\ folder.

It's ideal for automation, since it supports all the commands necessary to schedule jobs and manage machine pools in HCK. Just run it from the command line and you will see all the commands:

General Usage: WttCl.exe [Plug-In name] [plug-in parameters]

Avaliable Plug-In are:


I was using the tool mainly on my HCK server; however, I noticed that if I install HCK Studio on an arbitrary computer, wttcl.exe also gets installed. Calling wttcl.exe from a non-HCK computer usually resulted in the error message “Cannot locate any registered Enterprise” and a non-zero return value.

If you disassemble wttcl.exe you will notice that it uses plugins from the WttCl subfolder, where the plugins are implemented as DLLs. One level down there is a log folder called LOG. Here is a typical log folder path: C:\Program Files (x86)\Windows Kits\8.0\Hardware Certification Kit\Studio\WttCl\LOG

Looking into this folder, I saw that each time wttcl.exe failed, the following messages appeared in the log right before the failure:

Arguments parsing completed.
Verbose mode has been changed to False.
Base arugments have been parsed.
Checking "MachineID" argument...
Checking "MachinePoolID" argument...
Splitting "MachineName" argument...
Plug-in specific arugments have been parsed.
Obtaining connection to identity server and getting all runtime controller names....
Identity Server and/or Database not specified. Fetching from registered enterprise

So, it seems I am not passing the “Identity Database” to wttcl.exe, which makes it fail. According to the tool's command-line help, Identity Database is the name of the database that hosts HCK jobs. The default database name is DTMJobs, so passing the following parameter to wttcl.exe fixes the problem:

wttcl.exe … /IdentityDatabase:DTMJobs

Cannot remove computer from HCK (DTM) server

I came across an interesting problem recently when trying to remove a computer from a pool on an HCK (also known as DTM) server. Every time I try to remove the machine, the following message appears:

‘MachineId’ still has data associated with ‘Resource’ in the data store ‘MachineId’.

Looking at the stack trace, it seems the problem is related to a foreign key reference between tables:

Message: The DELETE statement conflicted with the REFERENCE constraint "FK_LogoTarget_Resource". The conflict occurred in database "DTMJobs", table "dbo.LogoTarget", column 'MachineId'.
The statement has been terminated.
Source:.Net SqlClient Data Provider
Target site: Void OnError(System.Data.SqlClient.SqlException, Boolean, System.Action`1[System.Action])
Stack trace:
   at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
   at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
   at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose)
   at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady)
   at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)
   at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async, Int32 timeout, Task& task, Boolean asyncWrite)
   at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, TaskCompletionSource`1 completion, Int32 timeout, Task& task, Boolean asyncWrite)
   at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(TaskCompletionSource`1 completion, String methodName, Boolean sendToPipe, Int32 timeout, Boolean asyncWrite)
   at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
   at Microsoft.DistributedAutomation.Provider.Sql.SqlProvider.CommitToDataStore(DataStoreObject dso)
Additional data:
	Key='HelpLink.ProdName' Value='Microsoft SQL Server'
	Key='HelpLink.ProdVer' Value='10.50.1600'
	Key='HelpLink.EvtSrc' Value='MSSQLServer'
	Key='HelpLink.EvtID' Value='547'
	Key='HelpLink.BaseHelpUrl' Value=''
	Key='HelpLink.LinkId' Value='20476'

There is not much information about this error. In fact, it seems I am the only one on the internet having this problem :). The quickest way to fix it is to perform the following operations:

1. Install .NET 3.5 on the machine where you have HCK installed (step-by-step instructions)

2. Install SQL Server Management Studio 2008 (step-by-step instructions)

3. Using SQL Server Management Studio 2008, open the DTMJobs database and locate the table dbo.LogoTarget

4. Delete from dbo.LogoTarget the machine with which you experience the problem, with something like:

DELETE FROM dbo.LogoTarget WHERE MachineId = <your machine id>

– in my case I deleted all entries, since I was doing a clean-up of HCK before a snapshot
– so I just ran the following SQL command:

DELETE FROM dbo.LogoTarget

Now… it is not clear what led to this situation. I could blame a recent meltdown of the vSphere server hosting my HCK installation, but that should not really affect MSSQL that much, since it is a transactional DB…

HP ElitePad wi-fi network problems


I recently got a tablet from HP, the ElitePad 900, which is a very nice piece of engineering: it is sleek, powerful and well made, and it seems to be the best tablet I have ever held in my hands.



However, once I configured the HP Support tool to install HP updates automatically, I noticed two important problems:

1. When I start the device, wi-fi is no longer working

2. When I change wi-fi settings in Device Manager, I sometimes get BSODs during reboots

The tablet itself does not have many ports, so if you don't have any network, this is pretty much a show-stopper issue: you can't easily connect an Ethernet cable or a USB key to try out some drivers, etc.


It's important to note that when I got the device, wi-fi was working perfectly. Since the problems started to appear after upgrades, I decided to roll back the wi-fi driver. As of today (31st of March 2013), there are basically four versions of the wi-fi driver for the ElitePad:

1. 3.7 (02 Mar 2013)

2. 3.7 (20 Feb 2013)

3. 3.7 (08 Feb 2013)

4. 3.7 (14 Jan 2013)

Since I did not know which one was originally installed on the device, I decided to go backwards from version 3.7 (02 Mar 2013) towards 3.7 (14 Jan 2013) by doing the following operations:

0. Download all four drivers to the ElitePad by clicking the links above for each driver and then the “Download” button. This way you will have all four versions ready for your tests.

1. Uninstall the current wi-fi driver from Device Manager (see this article on how to uninstall a network card from Device Manager)

2. Reboot

3. Install the next driver

4. Check if the network is back; if not, try another driver

Through this test I realized that I had problems with 3.7 (02 Mar 2013) and 3.7 (20 Feb 2013), while 3.7 (08 Feb 2013) seems to be stable and working properly. Since HP ships a changelog with each version of the driver, I was able to see the changelog entry between 3.7 (08 Feb 2013) and 3.7 (20 Feb 2013):

Fixes an issue where the wireless LAN does not function properly and an error symbol is displayed in the Device Manager.

The description is quite generic, and the fact that HP shipped a new version of the driver just 12 days after the previous release makes me think this was probably an under-tested hotfix. I will keep the older but stable driver for a while and see if any upcoming release changes something 🙂

aplay -l shows different card ids after reboot in Debian Squeeze GNOME

If you experience sound problems in Debian Squeeze (GNOME), this article should help you. However, it sometimes happens that the card id you put into .asoundrc changes the next time you boot your machine.

I've noticed that after a few reboots the aplay -l command shows that my card has been assigned a new id. Therefore, I have to edit the file ~/.asoundrc and store the new id of my card there, otherwise sound does not work in my browser. To automate this process I have created a simple bash script which does the following:

1. Searches for the card id by card name pattern (default pattern: LifeChat, which is the name of the USB headphones I have)

2. Once the card id is found, backs up the original file ~/.asoundrc to ~/.asoundrc_bk

3. Once the file is backed up, writes the new card id into ~/.asoundrc

4. At this point your sound should work in the browser, Flash, etc.

The script can be put into “Startup Applications” in Debian so that each time you start your computer it syncs your .asoundrc file and sound works out of the box. Here is example output of the script on my machine:

[1] Searching for pattern LifeChat among installed cards: found: card 1: default [Microsoft LifeChat LX-3000 ], device 0: USB Audio [USB Audio]

[2] Fetching id from card: 1

[3] Configuring .asoundrc with card id 1: OK
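The first step of the script, extracting the card id from the aplay -l listing by name pattern, can be sketched like this (the grep/sed pipeline is my own reconstruction and may differ from the downloadable script):

```shell
# Print the id of the first sound card whose `aplay -l` line
# matches the given name pattern; the listing is read from stdin.
extract_card_id() {
  pattern="$1"
  grep "$pattern" | sed -n 's/^card \([0-9][0-9]*\):.*/\1/p' | head -n 1
}

# On a real system the remaining steps would then be:
#   id=$(aplay -l | extract_card_id "LifeChat")
#   cp ~/.asoundrc ~/.asoundrc_bk      # step 2: backup
#   ...write the new id into ~/.asoundrc (step 3)
```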

Now, if you want to use the script, perform the following steps:

1. Download the script from my blog, either using the command line or a browser:

2. Open it in any text editor and locate the line CARD_NAME=”LifeChat”

3. Specify your card name instead of “LifeChat”. To find out what cards you have, run aplay -l in a terminal

4. Save the script and make it executable using chmod:

chmod +x ./

5. Run it and make sure no errors are reported:


6. Make sure that you have sound: open a browser, navigate to a page with sound, etc.

7. Once everything is okay, you can register the script for auto-start in System->Preferences->Startup Applications

Bye-bye Windows, Hello Linux

First time ever in my life, I removed a licensed and activated Windows from my desktop computer and installed Linux instead. My 64-bit Windows 7 Ultimate was getting slower and slower with time, and I decided to try something new, stable and preferably free.

Before, I always had a dual boot between Windows and Linux, and all my data was on an NTFS partition. Now the time for a change has come, so I converted everything to ext4. So far so good (but I still keep backups on two external NTFS drives, just in case).

I miss KeePass2 in Debian Squeeze x86_64 (for the moment it is present only in Wheezy), but I found that I can use KeePassX :). Anyway, now there is no way back; the license is lost and I am not going to get a new one. Debian runs quite well, and I am pretty happy using it.

It took me 4 years to start using Linux more seriously: I first met Ubuntu in 2008 and have been hooked ever since 🙂

Setting environment variables in Mac OS (OSX) for Ida Pro

As a user of Ida Pro, I recently bumped into a case where my Ida Pro could not connect to the license server because it was in another sub-network. So each time I tried to open some file, Ida told me:

License server error: Unable to checkout: LM-X Error: (Internal: 443 Feature: IDAPROFM)

Now, it seems I have to set the environment variable HEXRAYS_LICENSE_PATH to point to my server's IP address. But how to do that in OSX? After googling and trying, and googling and trying, I finally found the solution:

1. Open the file /etc/launchd.conf with admin privileges, or create it if it does not exist

2. Put inside the address of your server:


3. Save the file and reboot!

It is essential to reboot; if you don't, this may not work for you.
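For reference, the entry from step 2 is a launchctl-style setenv command; with a placeholder address (replace it with your actual license server's IP) the line might look like this:

```
setenv HEXRAYS_LICENSE_PATH 192.168.1.10
```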

Web filtering went wrong for BitDefender Endpoint Security

I recently bumped into an interesting case when dealing with a web page blocked by the BitDefender antivirus. If a page is unsafe or blocked by the system administrator, your browser will show a page like this:

As shown on the screenshot, the page being blocked is the Ukrainian version of Google search. Since I am not happy with this page being blocked, I'd like to find a way to overcome the problem, and the logical step would be to try out another HTTP client, like wget. For those who are not familiar with wget: it is a must-have HTTP client from the GNU/Linux world and can be downloaded for Windows from here:

OK, let’s do wget and …

As you can see, wget works! Well, obviously the only difference between wget and chrome.exe is the binary name, since both are valid HTTP clients. If BitDefender is doing the filtering based on binary names, we can try to “forbid” wget by renaming it to chrome.exe and see if it fails with the same URL. So, I rename wget.exe to chrome.exe and try again:

…and indeed, renaming wget.exe to chrome.exe made BitDefender block the page: 403 Blocked by Bitdefender. This also means that doing the reverse, renaming your chrome.exe (or any well-known browser like firefox.exe or opera.exe) to wget.exe or something.exe, will allow you to bypass BitDefender's content checking policy and escape being blocked.

This is not really a bug in BitDefender, but rather a poor design of the overall solution. I can hardly accept that renaming a binary lets you escape monitoring. I wonder what other corner cases are hidden in the code?
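To make the flaw concrete, here is a toy sketch of a purely name-based check (my own illustration, not BitDefender's actual logic). The verdict depends only on the file name, which anyone can change with a rename:

```shell
# "Detect" a browser by binary name alone - success (0) means "browser".
is_browser() {
  case "$(basename "$1")" in
    chrome.exe|firefox.exe|opera.exe) return 0 ;;
    *) return 1 ;;
  esac
}

# A wget renamed to chrome.exe passes the check,
# while a chrome renamed to wget.exe fails it:
is_browser "tools/chrome.exe" && echo "treated as a browser"
is_browser "tools/wget.exe"   || echo "treated as a plain tool"
```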

You can't really find a good excuse for this bug, because detecting browsers is not rocket science. You may enumerate the list of Add/Remove Programs entries, check the InstallLocation property, and then verify it when a program executes; you may also check which certificate was used to sign the binary and see that it's signed by Google, Mozilla, Microsoft, or whoever. If you are lazy, you may just build a predefined set of paths (C:\Program Files\Firefox\*) and check all binaries from those paths. All these solutions have advantages and disadvantages, but they are still way better than a check by process name.

I know that in the modern IT engineering world there is always a balance between “doing it quickly and covering the market before the competitor” and making it stable and solid. I just don't understand why small things that can lead to big consequences are not taken into account by big software companies, which definitely have more resources than startups to make things right.

Debian Wheezy Beta 1 : white (corrupted) screen with square mouse pointer on ATI Radeon HD 3600 XT (ChipID = 0x9598)

Since I was having problems with the installation of Wheezy, I could not sleep well and let the opportunity to test the new Debian release pass by 🙂 If you recall, here is what I got after installing Wheezy on my machine:

This problem happened to me under both GNOME and KDE, and it is resolved by installing the firmware-linux-nonfree package from the non-free repository. You may use an online generator to build a sources.list file for apt that includes the “non-free” repository, which is disabled by default. I was lucky because I was able to ssh to the machine and install the package remotely. I guess for people without a configured network this may be a show stopper.

If you are a newbie and have no idea what I am talking about, but you experience this problem, follow these simple steps:

1. Configure the network during installation of Wheezy. If your network is not configured and your screen does not work, you will not be able to connect remotely to the machine to install the package

2. During the package selection step in the installer, make sure you don't uncheck “OpenSSH Server” (it is enabled by default)

3. When you observe the “white corrupted screen”, try to connect to your machine via SSH. If it does not work, you probably did something wrong at step 1 or 2

4. Once you are connected, you need to add the non-free repository to apt's sources.list file. Visit the generator website and select your country, your release (Wheezy), your architecture, and of course tick the “Non-free” checkbox

5. Acquire root privileges by executing the command:


6. As root, edit the file /etc/apt/sources.list and put inside the content you generated in step 4

7. You may use any text editor, such as vi or nano. If you are a newbie, enter the following:

nano /etc/apt/sources.list

8. Save the file and exit

9. Update apt by executing:

apt-get update

10. Install firmware-linux-nonfree package by executing

apt-get install firmware-linux-nonfree

11. Reboot!

Debian Wheezy experience, or why FreeBSD looks more interesting compared to Debian

I've been using Debian Squeeze (6.x) at home for a while, and in general it was quite good, not counting a few bugs with the kernel, Flash, sound and KDE. Recently I saw that development of Wheezy (7.x) had been frozen, and I decided to upgrade…

Here is the result of my upgrade:

Well, I had read that the number of bugs in Wheezy is quite high for a frozen release, but I was not expecting it to fail that miserably for me. Anyway, it was a chance to either dig deeper into the problem or try out something new.

Since I recently started a project under OSX at work, I felt somewhat interested in BSD internals (Apple's Mac OS X is based on *BSD). So, I decided to go for FreeBSD 9.0 amd64 :). At first I was afraid of a poor packaging system or a lack of the packages I had used for a while under GNU/Linux, but eventually it turned out to be better than I expected.

If you are coming from Debian to FreeBSD, you just have to substitute “sudo apt-get install something” with “sudo pkg_add -r something”. That is basically it. For example, I installed KDE 4 in FreeBSD just by typing pkg_add -r kde4. Well, you have to install a few dependencies (xorg) and do a few shamanic dances near the fire to make KDE auto-start, but in general it looks very solid, with fewer bugs than I experienced on Debian.

Since I am a KDE/GNOME user, most of the stuff under FreeBSD in KDE/GNOME looks exactly the same as in any other environment running on Debian or Ubuntu. I.e.:

– you want torrent? sudo pkg_add -r ktorrent

– you want vlc? sudo pkg_add -r vlc

– you want Google Chrome? sudo pkg_add -r chromium

– etc

So far I have not found any package that I used in Debian and that does not exist in FreeBSD. The only thing I don't like is the poor Flash support. You can install a few things and it will work for well-known sites, but once you go to some news website with a custom Flash player, it does not work properly.

On the other hand, FreeBSD feels more solid in terms of kernel stability and packaging, and the overall development process seems to move faster than Debian's, so I may well forget about the Flash glitches and just benefit from the stability 🙂

In the end, here is my FreeBSD experience … Compare it with the first screenshot at the beginning of the post 🙂

Mac OS X vs Windows kernel development: from hell to paradise (Part # 2)

This post is a continuation of the compulsive thoughts about OSX vs Windows kernel development started here: Part # 1

So, it's been a while since I last wrote about OSX. My project has grown up a little and it's time to release a first alpha. The kernel extension (kext) behaves as a socket-level and an IP-filter-level NKE which does a bunch of networking-related stuff, and it is very, very similar in concept to the TDI and NDIS filters that exist in the Windows world.

If you are not familiar with TDI or NDIS, this post might not be of any use to you, so feel free to skip it. If you are: well, if you've been writing TDI filters, you know there are a few gotchas you have to follow in order to avoid incompatibility problems with third-party software. Since this post is more about OSX than Windows, I will not go too much into the Windows details. So, this is what makes development on Windows a hell:

1. Proper registration of your filter in PNP_TDI group

2. Handling special guys like Juniper VPN (NEO_FLT_xx_xx.sys)

3. Proper handling of an insufficient IRP stack location count in your dispatch routine

4. Doing some tricks with Afd.sys IrpStackSize

5. The impossibility of dynamically unloading a TDI filter

6. Cross-signing your code in order to load on 64-bit Windows

7. Remember VeriSign changing the key length recently…?

8. Messing with DIFxApp in your MSI code and catching ugly bugs with poor logs (albeit very verbose output)

9. Setting up DTM and passing WHQL tests for each flavour of Windows

10. Shipping x32 / x64 binaries for TDI + NDIS IM filters separately

11. Spinlocks everywhere (remember: you might be called at DISPATCH_LEVEL IRQL)

Well, it already sounds complicated. In the OSX world things are way simpler:

1. Your socket-level or IP-level NKE does not have to put itself into any group. Just load it at any time using kextload

2. I have not (yet) encountered any incompatibility with VirtualBox, VMware or BitDefender. Maybe later

3. There is no IRP concept in the OSX world. You have Mach ports, and it certainly looks simpler. NKEs use a callback approach

4. Not applicable

5. You can unload your NKE kext in OSX; just make sure you unregister properly from the appropriate protocol

6. You don't have to be signed to load a kext. You just have to be root

7. Not applicable

8. You don't have any complex frameworks for installing drivers. Load the kext via an sh script or have it load automatically

9. No DTM or WHQL nightmares. I like the DTM framework and I use it for my automated tests, but I dislike the idea of paying per submission and the overall complexity behind the process

10. OSX has the concept of fat binaries: a single executable file containing two or more binaries inside, with the right one automatically loaded for the architecture in use. Cool, huh?

11. You are free to use mutexes, semaphores, read-write mutexes, etc
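To illustrate points 1, 5 and 6 above, a typical load/unload session looks roughly like this (MyNKE.kext is a made-up bundle name standing in for your own kext):

```shell
# kexts must be owned by root:wheel before the kernel will load them
sudo chown -R root:wheel /tmp/MyNKE.kext

# load it: no signing, no special groups, root is enough
sudo kextload /tmp/MyNKE.kext

# confirm it is in the kernel
kextstat | grep MyNKE

# unload it again; this only succeeds if the NKE has unregistered
# its socket/IP filters and no sockets still hold references to it
sudo kextunload /tmp/MyNKE.kext
```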

What actually strikes me is that it’s quite hard to get an RBT (red-black tree, mostly known as std::map in STL) in the Windows kernel: you are stuck with AVL or splay trees via the RtlGenericTable routines, while in OSX you are free to use either an RBT or a splay tree just by using a few macros. You have a rich set of synchronization primitives for your needs (interlocked routines, mutexes, semaphores, lock groups, read-write mutexes, etc) and you are not stuck with spinlocks like in TDI filters.

On top of that, when it comes to packaging, everything is extremely simple: you just create a fat binary, so your driver will load on x32, x64 or PPC out of the box. No need to provide special packages per architecture. Just a single, universal package.
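For user-space pieces the same trick is one compiler invocation away; a quick sketch, where hello.c stands in for any C source file:

```shell
# compile the same source into a universal (fat) binary:
# one output file containing a slice per architecture
gcc -arch i386 -arch x86_64 -o hello hello.c

# lipo shows which architecture slices the file carries
lipo -info hello
```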

I already mentioned that Xcode is used to write drivers, which is not yet the case for Windows 7 and Visual Studio (in 2012!!!). What I did not mention is the approach Xcode takes when automating builds from the command line: when you compile your project from the command line, it just creates a single folder in your project tree called “build”. This folder contains two files: the compiled binary and its symbols. That’s it. Now compile your project in Visual Studio and check how many obj files and other crap have crowded the directory 😉

What I have not talked about yet is … handling crash dumps. According to Microsoft, in a bug check callback the system is dead and it is not safe to touch any file system, so the dump is stored in the page file and then gets extracted during the next startup. There is no out-of-the-box support for sending crash dumps to some local server where you could analyze them.

In the OSX world you can simply configure the kernel to send your panic dump to a file server (which can be a plain Mac machine, not necessarily a server: a MacBook Air, a Mac mini, an iMac, etc). What happens when it panics is that it sends the dump right away, even over a Wi-Fi connection :). I think for the Redmond guys this is a very big shame :). If you are reading this and not getting red, please read it again :). Surprisingly, it works well even over a slow Wi-Fi connection. Another interesting thing is that after the dump is sent to the server, you can attach remotely over gdb and analyze the situation.
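For the record, the setup is a couple of commands on each side. This sketch follows the recipe from Apple's kernel-debugging documentation of that era; 10.0.1.5 stands in for your dump server's IP address:

```shell
# --- on the machine being debugged ---
# debug=0xd44 makes the kernel transmit a core dump on panic;
# _panicd_ip points at the Mac that will receive it
sudo nvram boot-args="debug=0xd44 _panicd_ip=10.0.1.5"

# --- on the receiving Mac (any Mac, no server edition needed) ---
# create the drop directory and start the kernel dump daemon;
# incoming panic dumps land in /PanicDumps
sudo mkdir -p /PanicDumps
sudo chmod 1777 /PanicDumps
sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.kdumpd.plist
```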

I am starting to like OSX more and more: it gives you quite a rich API and does not lock you into the strict hell of boundaries you have in the Windows world.

Asus P5QL PRO: no post, fans spin, switch off / switch on

I have a bizarre bug with my Asus P5QL-PRO motherboard: each time the OS goes to sleep and the sleep is interrupted by a power-off event, on the next start the motherboard does the following:

1. Screen is off

2. No POST signal

3. Fans continue to spin

4. Machine starts, then stops, then starts again

When I hit this problem for the first time I thought my mobo was dead. Some googling revealed that a CMOS reset should help. So: remove the CMOS battery, move the jumper from the 1-2 to the 2-3 position, wait ten seconds and put everything back in reverse order.

Yesterday I got the same behavior after attaching some devices to the mobo (I was rerouting USB devices from the case), but a CMOS reset did not help. I left the machine with the reset jumper in the 2-3 position for two hours and still no result. So I started to unplug devices one by one, trying to determine the cause. Once I removed the memory, it all started to work perfectly :). I guess I might have touched the memory and it got slightly unseated, or I have a new glitch with the memory 🙂

Anyway, I put everything back again and now it works.

Asus P5QL Pro & Noctua NH-D14 cooler

I’ve recently been playing with my oldish Asus P5QL Pro box, trying to cool down the CPU because the stock Intel cooler was almost dead. I was getting idle temperatures of 45 °C or above. So I decided to go for a Noctua NH-D14 cooler, which seems to be pretty effective judging by the reviews.

I went to the local shop, bought a brand-new Noctua NH-D14, went home, removed the motherboard, started to mount the cooler and noticed that one of the mounting bolts was not threaded … That looked like a factory defect, which made me angry because the cooler is quite expensive and is targeted at the “above average needs” market. So it should be perfect, right?

Anyway, I went back to the shop and had the cooler replaced, which cost me an extra two days of waiting, and …

Looks quite tight, huh? The NH-D14 fits my P5QL-PRO quite well, not interfering with the RAM slots or any other components. It is quite tight near the power supply though, so depending on the size of your case it may or may not be a good option for you. It is now so tight in that corner that if I wanted to remove the power supply, I would probably need to remove the whole cooler from the CPU, which also means applying new thermal paste …

The cooler is also quite tall, so you may have trouble closing the side panel of the case. Now my idle temperature is 26 °C (during the summer), which makes me pretty happy 🙂

Configuring gtalk in pidgin in Debian 6 Squeeze x64 GNOME for Google Apps account

If you are going to use a Google Apps account in pidgin, you have to use a few tricks to make things work:

In Basic tab:

1. Change Domain from to

In Advanced tab:

2. Uncheck “Require SSL/TLS” and check “Force old (port 5223) SSL”

3. Change “Connect port” to 443

4. Change connect server to

This should do it. Here is a screenshot from my working setup:

Configuring ICQ in pidgin in Debian 6 Squeeze x64 GNOME

I’ve always had problems configuring an ICQ account in pidgin on Debian. Somehow it never worked, although I tried all the different combinations like “Use clientlogin” or “Use SSL”, etc. It never worked out of the box the way it does with Kopete in Debian KDE.

At some point I thought it was a problem with the login server, since pidgin pre-fills a default one when you add an ICQ account. According to the bug ticket I found, though, that server is correct and should work flawlessly.

Despite the message in the ticket, I decided to try the shorter ICQ server name instead of the default AOL one, which makes perfect sense to me (I have no idea what AOL has to do with anything; as I am going to use the ICQ protocol, an ICQ address is the logical choice), and guess what? It works!

Here is a screenshot of my ICQ account settings in pidgin:

ABM handlebar conversion kit for Honda CBR 600 F4i (FSport) 2001

Just got handlebar conversion kit for my Honda CBR F4i from ABM. The kit basically looks like this:

It contains the following items:

1. Top yoke

2. Longer brake lines for front brakes

3. Longer clutch cable

4. Handlebar & handlebar ends

5. Longer wires for lights, etc

6. Dot 5.1 brake fluid

7. Some documentation in German 🙂

Can’t wait to mount the kit on my F4i!