Not a very common topic for me, but I thought it could be neat to mention some tips & tricks. I won’t go into the details of the Python GIMP SDK; most of it can be figured out from the GIMP documentation. I spent a total of one hour researching this topic, so I’m not an expert and I could have made mistakes, but perhaps I can save some effort for others who want to achieve the same results. You can jump to the end of the tutorial to find a nice skeleton batch script if you’re not interested in reading the theory.

To those wondering why GIMP: it’s because I created a new icon for Profiler and wanted to automate some operations on it in order to produce it in all the sizes and flavors I need. One of the produced images had to be semi-transparent. So I thought, why not use a GIMP batch command, since GIMP is installed on most Linux systems by default anyway?

Just to mention it: GIMP also supports a Lisp syntax for scripts, but it caused my eyes to bleed profusely, so I didn’t even take it into consideration and focused directly on Python.

Of course, I could’ve tried other solutions like PIL (Python Imaging Library) which I have used in the past. But GIMP is actually nice, you can do many complex UI operations from code and you also have an interactive Python shell to test your code live on an image.

For example, open an image in GIMP, then open the Python console from Filters -> Python-Fu -> Console and execute the following code:

```python
img = gimp.image_list()[0]
img.layers[0].opacity = 50.0
```

And you’ll see that the image is now halfway transparent. What the code does is take the first image from the list of open images and set the opacity of its first layer to 50%.

This is the nice thing about GIMP scripting: it lets you manipulate layers just like in the UI. This allows for very powerful scripting capabilities.

The first small issue I encountered in my attempt to write a batch script is that GIMP only accepts Python code as a command-line argument, not the path to a script on disk. According to the official documentation:

All this means that you could easily invoke a GIMP Python plug-in such as the one above directly from your shell using the (plug-in-script-fu-eval …) evaluator:

The idea behind it is that you create a GIMP plugin script, put it in the GIMP plugin directory, register methods like in the following small example script:

```python
#!/usr/bin/env python

from gimpfu import *

def echo(*args):
    """Print the arguments on standard output"""
    print "echo:", args

register(
    "console_echo", "", "", "", "", "",
    "<Toolbox>/Xtns/Languages/Python-Fu/Test/_Console Echo", "",
    [
        (PF_STRING, "arg0", "argument 0", "test string"),
        (PF_INT, "arg1", "argument 1", 100),
        (PF_FLOAT, "arg2", "argument 2", 1.2),
        (PF_COLOR, "arg3", "argument 3", (0, 0, 0)),
    ],
    [],
    echo
)

main()
```

And then invoke the registered method from the command line as explained above.

I noticed many threads on stackoverflow.com where people were trying to figure out how to execute a batch script from the command line. The obvious solution which came to my mind is to execute Python code from the command line which prepends the script’s directory to sys.path and then imports the batch script. So I searched and found that exact solution suggested by the user xenoid in this stackoverflow thread.
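That approach can be sketched as follows (a hypothetical helper of my own; `-idf` runs GIMP without interface, data and fonts, and the `python-fu-eval` batch interpreter evaluates the `-b` arguments as Python):

```python
def build_gimp_batch_cmd(script_dir, module="batch"):
    # Build the GIMP command line which prepends the directory containing
    # our batch script to sys.path and then imports it (the trick suggested
    # by xenoid on Stack Overflow).
    code = "import sys; sys.path.insert(0, '%s'); import %s" % (script_dir, module)
    return ["gimp", "-idf", "--batch-interpreter", "python-fu-eval",
            "-b", code, "-b", "pdb.gimp_quit(1)"]
```

The returned list can be passed to subprocess.call; the final pdb.gimp_quit(1) makes GIMP exit once the batch is done.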

What took me longest to figure out was that I needed to call the method merge_visible_layers before saving the image. Initially I tried without calling it, and the saved image was not transparent at all. So I thought the opacity was not being set correctly and tried other methods, such as calling gimp_layer_set_opacity, but without success.

I then tried in the console and noticed that the opacity is actually set correctly, but that information is lost when saving the image to disk. I then found the image method flatten and noticed that the semi-transparency was applied, but unfortunately the saved PNG background was now white and no longer transparent. So I figured that there had to be a method to obtain a similar result without losing the transparent background. Looking a bit among the methods in the SDK, I found merge_visible_layers. I think it’s important to point this out, in case you experience the same issue and can’t find a working solution, just like it happened to me.
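Put together inside the Python-Fu console, the working sequence looks roughly like this (a sketch assuming an image is already open; the trailing file_png_save arguments are the usual interlace/compression/chunk flags):

```python
img = gimp.image_list()[0]
img.layers[0].opacity = 50.0
# Merging the visible layers applies the opacity while keeping the alpha
# channel, unlike flatten() which fills the background:
layer = img.merge_visible_layers(CLIP_TO_IMAGE)
pdb.file_png_save(img, layer, "/tmp/icon.png", "icon", 0, 9, 1, 1, 1, 1, 1)
```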

Now we have a working solution, but let’s create a more elegant one, which allows us to use GIMP from within the same script, without any external invocation.

I’m writing this post purely out of solidarity with those who share my nowadays not-so-popular opinions. There’s most likely zero chance of changing anyone’s mind.

Job Interviews

Back in the days when I was still working as an employee, I only experienced interviews in the shape of conversations aimed at establishing whether or not I had the necessary knowledge for the job.

I am grateful that I’m not looking for a job today, because those times are gone. Today, job interviews are made of questions and tests which can only establish whether the candidate wasted enough time practicing for the interview. In fact, there are even books(!) to prepare someone for these interviews. This says nothing about the person’s real skills and fitness for the job. There are people who specialize in passing job interviews… Those are the people you want to hire, yeah.

Many clever IT guys won’t even bother with such nonsense. I know I wouldn’t. Instead, I would just continue to look for a company which is smart.

What I’m saying is that important companies are missing out on real talent based on these ridiculous interviews. Don’t get me wrong, for me or people like me that is just perfect, because whenever we need to hire a brilliant software developer, it’s very easy. There are many talented people around who are easily captivated by a serious job interview.

Agile Development

I don’t have much to say about the subject, because I have never had the misfortune to work for a company which used agile development, but I want to recommend an excellent post by Michael O. Church, namely “Why “Agile” and especially Scrum are terrible”, which I read a few years ago.

At the time I was searching for a funny rant against agile development and that’s how I got to this very funny and insightful read. I found many of my own views represented in his writing.

I really haven’t got anything to add to Michael’s post, because, being a low-level guy, any contact with agile development is unlikely for me.

Back in the old days, the retarded bullshit we had was called UML. Then, apparently, someone thought that UML wasn’t nearly retarded enough and came up with agile development, which is a million times more retarded.

What I think is funny is that some people defend agile as not being entirely bad in certain regards, because agile tries to claim for itself common sense and basic principles. Developers who actually need to be told these basic principles should gain experience before developing major projects in the first place and managers who need it shouldn’t manage anyone at all.

Quoting Michael:

Like a failed communist state that equalizes by spreading poverty, Scrum in its purest form puts all of engineering at the same low level: not a clearly spelled-out one, but clearly below all the business people who are given full authority to decide what gets worked on.

This is because agile development gives the illusion to managers who don’t understand the technology that they are in control of the development process. That’s the reason why it has become so popular. Just like open-space offices give to the same managers (and owners) the illusion of productivity. “Oh, it’s buzzing! I’m getting value for my money!”.

Open Spaces

Another brilliant idea which became trendy. I’m late at criticizing it, because there are already many articles / studies / polls saying that open spaces are terrible. Anyway, it’s a good example of how something stupid got popular and still is. I have worked in open spaces myself and it’s extremely stressful and ineffective.

“How can we make people who have to think for a living more productive? I know! Let’s put noise and people moving around them!”

Open spaces force you to look busy even if you’re not. Whoever thinks that it’s possible to write code for 8 hours a day, every day, for a long period of time has never programmed in his entire life. I can program intensively 5-6 hours a day for a sustained period of time, but even that is a lot. Four hours is more realistic. And I have always been an over-achiever. Forcing people to waste their time on social media and YouTube to look busy is just stupid.

Quoting Bill Hicks:

“Hicks! How come you’re not working?”. I go: “There’s nothing to do”, “Well, you pretend that you’re working”, “Why don’t you pretend I’m working? Yeah, you get paid more than me, you fantasize!”

That’s why people who work for me are completely free to organize their time as they wish. Companies should hire talented people and talented people don’t need a baby-sitter. Unless she’s hot.

Diversity

New definition of “inclusion”: let’s treat people differently because of what they are or represent, either in the workplace or on social media. And let’s over-praise their achievements. This will be fair to the people outside of their group and to the really clever people who belong to that group. Whatever minority that is.

People should be hired, promoted and awarded based on their merits. Not because of what they are or represent. The current trend is the result of a culture which favors good intentions and feelings over reason and logic, which in a technical field is even more ludicrous.

The pyramids were built on the sweat, blood and tears of many men. Not by singing Kumbaya while holding hands in a circle.

Making complex things is hard.

Having said that, I absolutely encourage neuro-diversity. Many companies should hire someone who isn’t an idiot for a change.

To restart my career as a technical writer, I chose a light topic: running applications compiled with new versions of Visual Studio on Windows XP. I didn’t find any prior research on the topic, but I also didn’t search much. There’s no real purpose behind this article beyond the fact that I wanted to know what could prevent a new application from running on XP. Our target application will be the embedded version of Python 3.7 for x86.

If we try to start any new application on XP, we’ll get an error message informing us that it is not a valid Win32 application. This happens because of some fields in the Optional Header of the Portable Executable.

Most of you probably already know that you need to adjust these fields as follows:
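The fields in question are MajorOperatingSystemVersion/MinorOperatingSystemVersion and MajorSubsystemVersion/MinorSubsystemVersion in the Optional Header, which new linkers set to 6.0 and which must be lowered to 5.1 for XP. A minimal sketch of the patch in Python (raw offsets, no PE library; the version fields sit at the same offsets in PE32 and PE32+):

```python
import struct

def patch_xp_versions(pe: bytearray) -> None:
    # e_lfanew (the offset of the "PE\0\0" signature) lives at 0x3C in the
    # DOS header; the Optional Header follows the 4-byte signature and the
    # 20-byte COFF header.
    e_lfanew = struct.unpack_from("<I", pe, 0x3C)[0]
    opt = e_lfanew + 4 + 20
    # MajorOperatingSystemVersion / MinorOperatingSystemVersion -> 5.1 (XP)
    struct.pack_into("<HH", pe, opt + 40, 5, 1)
    # MajorSubsystemVersion / MinorSubsystemVersion -> 5.1 (XP)
    struct.pack_into("<HH", pe, opt + 48, 5, 1)
```

Any PE editor achieves the same result; the point is simply which four WORD fields need lowering.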

Fortunately, it’s enough to adjust the fields in the executable we want to start (python.exe), there’s no need to adjust the DLLs as well.

If we try to run the application now, we’ll get an error message due to a missing API in kernel32. So let’s turn our attention to the imports.

We have a missing vcruntime140.dll, then a bunch of “api-ms-win-*” DLLs, then only python37.dll and kernel32.dll.

The first thing which comes to mind is that in new applications we often find these “api-ms-win-*” DLLs. If we search for the prefix in the Windows directory, we’ll find a directory both in System32 and SysWOW64 called “downlevel”, which contains a huge list of these DLLs.

As we’ll see later, these DLLs aren’t actually used, but if we open one with a PE viewer, we’ll see that it contains exclusively forwarders to APIs contained in the usual suspects such as kernel32, kernelbase, user32 etc.

Interestingly, in the downlevel directory we can’t find any of the files imported by python.exe. These DLLs actually expose C runtime APIs like strlen, fopen, exit and so on.

If we don’t have any prior knowledge on the topic and just do a string search inside the Windows directory for such a DLL name, we’ll find a match in C:\Windows\System32\apisetschema.dll. This DLL is special as it contains a .apiset section, whose data can easily be identified as some sort of format for mapping “api-ms-win-*” names to others.

```
Offset    0  1  2  3  4  5  6  7  8  9  A  B  C  D  E  F  Ascii
00013AC0  C8 3A 01 00 20 00 00 00 73 00 74 00 6F 00 72 00  .:......s.t.o.r.
00013AD0  61 00 67 00 65 00 75 00 73 00 61 00 67 00 65 00  a.g.e.u.s.a.g.e.
00013AE0  2E 00 64 00 6C 00 6C 00 65 00 78 00 74 00 2D 00  ..d.l.l.e.x.t.-.
00013AF0  6D 00 73 00 2D 00 77 00 69 00 6E 00 2D 00 73 00  m.s.-.w.i.n.-.s.
00013B00  78 00 73 00 2D 00 6F 00 6C 00 65 00 61 00 75 00  x.s.-.o.l.e.a.u.
00013B10  74 00 6F 00 6D 00 61 00 74 00 69 00 6F 00 6E 00  t.o.m.a.t.i.o.n.
00013B20  2D 00 6C 00 31 00 2D 00 31 00 2D 00 30 00 00 00  -.l.1.-.1.-.0...
00013B30  00 00 00 00 00 00 00 00 00 00 00 00 44 3B 01 00  ............D;..
00013B40  0E 00 00 00 73 00 78 00 73 00 2E 00 64 00 6C 00  ....s.x.s...d.l.
00013B50  6C 00 00 00 65 00 78 00 74 00 2D 00 6D 00 73 00  l...e.x.t.-.m.s.
```

Searching the web, the first resources I found on this topic were two articles on the blog of Quarkslab (Part 1 and Part 2). However, I quickly realized that, while useful, they were too dated to provide me with up-to-date structures to parse the data. In fact, the second article shows a version number of 2, while at the time of my writing the version number is 6.

```
Offset    0  1  2  3  Ascii
00000000  06 00 00 00  ....
```

Just for completeness, after the publication of the current article, I was made aware of an article by deroko about the topic predating those of Quarkslab.

Anyway, I searched some more and found a code snippet by Alex Ionescu and Pavel Yosifovich in the repository of Windows Internals. I took the following structures from there.

```c
typedef struct _API_SET_NAMESPACE {
    ULONG Version;
    ULONG Size;
    ULONG Flags;
    ULONG Count;
    ULONG EntryOffset;
    ULONG HashOffset;
    ULONG HashFactor;
} API_SET_NAMESPACE, *PAPI_SET_NAMESPACE;

typedef struct _API_SET_HASH_ENTRY {
    ULONG Hash;
    ULONG Index;
} API_SET_HASH_ENTRY, *PAPI_SET_HASH_ENTRY;

typedef struct _API_SET_NAMESPACE_ENTRY {
    ULONG Flags;
    ULONG NameOffset;
    ULONG NameLength;
    ULONG HashedLength;
    ULONG ValueOffset;
    ULONG ValueCount;
} API_SET_NAMESPACE_ENTRY, *PAPI_SET_NAMESPACE_ENTRY;

typedef struct _API_SET_VALUE_ENTRY {
    ULONG Flags;
    ULONG NameOffset;
    ULONG NameLength;
    ULONG ValueOffset;
    ULONG ValueLength;
} API_SET_VALUE_ENTRY, *PAPI_SET_VALUE_ENTRY;
```

The data starts with an API_SET_NAMESPACE structure.

Count specifies the number of API_SET_NAMESPACE_ENTRY and API_SET_HASH_ENTRY structures. EntryOffset points to the start of the array of API_SET_NAMESPACE_ENTRY structures, which in our case comes exactly after API_SET_NAMESPACE.

Every API_SET_NAMESPACE_ENTRY points to the name of the “api-ms-win-*” DLL via the NameOffset field, while ValueOffset and ValueCount specify the position and count of API_SET_VALUE_ENTRY structures. The API_SET_VALUE_ENTRY structure yields the resolution values (e.g. kernel32.dll, kernelbase.dll) for the given “api-ms-win-*” DLL.

With this information we can already write a small script to map the new names to the actual DLLs.

```python
import os

from Pro.Core import *
from Pro.PE import *

def main():
    c = createContainerFromFile("C:\\Windows\\System32\\apisetschema.dll")
    pe = PEObject()
    if not pe.Load(c):
        print("couldn't load apisetschema.dll")
        return
    sect = pe.SectionHeaders()
    nsects = sect.Count()
    d = None
    for i in range(nsects):
        if sect.Bytes(0) == b".apiset\x00":
            cs = pe.SectionData(i)[0]
            d = CFFObject()
            d.Load(cs)
            break
        sect = sect.Add(1)
    if not d:
        print("couldn't find .apiset section")
        return
    n, ret = d.ReadUInt32(12)
    offs, ret = d.ReadUInt32(16)
    for i in range(n):
        name_offs, ret = d.ReadUInt32(offs + 4)
        name_size, ret = d.ReadUInt32(offs + 8)
        name = d.Read(name_offs, name_size).decode("utf-16")
        line = str(i) + ") " + name + " ->"
        values_offs, ret = d.ReadUInt32(offs + 16)
        value_count, ret = d.ReadUInt32(offs + 20)
        for j in range(value_count):
            vname_offs, ret = d.ReadUInt32(values_offs + 12)
            vname_size, ret = d.ReadUInt32(values_offs + 16)
            vname = d.Read(vname_offs, vname_size).decode("utf-16")
            line += " " + vname
            values_offs += 20
        offs += 24
        print(line)

main()
```

This code can be executed with Cerbero Profiler from the command line as “cerpro.exe -r apisetschema.py”. These are the first lines of the produced output:

```
0) api-ms-onecoreuap-print-render-l1-1-0 -> printrenderapihost.dll
1) api-ms-onecoreuap-settingsync-status-l1-1-0 -> settingsynccore.dll
2) api-ms-win-appmodel-identity-l1-2-0 -> kernel.appcore.dll
3) api-ms-win-appmodel-runtime-internal-l1-1-3 -> kernel.appcore.dll
4) api-ms-win-appmodel-runtime-l1-1-2 -> kernel.appcore.dll
5) api-ms-win-appmodel-state-l1-1-2 -> kernel.appcore.dll
6) api-ms-win-appmodel-state-l1-2-0 -> kernel.appcore.dll
7) api-ms-win-appmodel-unlock-l1-1-0 -> kernel.appcore.dll
8) api-ms-win-base-bootconfig-l1-1-0 -> advapi32.dll
9) api-ms-win-base-util-l1-1-0 -> advapi32.dll
10) api-ms-win-composition-redirection-l1-1-0 -> dwmredir.dll
11) api-ms-win-composition-windowmanager-l1-1-0 -> udwm.dll
12) api-ms-win-core-apiquery-l1-1-0 -> ntdll.dll
13) api-ms-win-core-appcompat-l1-1-1 -> kernelbase.dll
14) api-ms-win-core-appinit-l1-1-0 -> kernel32.dll kernelbase.dll
...
```

Going back to API_SET_NAMESPACE, its field HashOffset points to an array of API_SET_HASH_ENTRY structures. These structures, as we’ll see in a moment, are used by the Windows loader to quickly index a “api-ms-win-*” DLL name. The Hash field is effectively the hash of the name, calculated by taking into consideration both HashFactor and HashedLength, while Index points to the associated API_SET_NAMESPACE_ENTRY entry.

The code which does the hashing is inside the function LdrpPreprocessDllName in ntdll:

```
77EA1DAC  mov   ebx, dword ptr [ebx+0x18]  ; HashFactor in ebx
77EA1DAF  mov   esi, eax                   ; esi = dll name length
77EA1DB1  movzx eax, word ptr [edx]        ; one unicode character into eax
77EA1DB4  lea   ecx, dword ptr [eax-0x41]  ; ecx = character - 0x41
77EA1DB7  cmp   cx, 0x19                   ; compare to 0x19
77EA1DBB  jbe   0x77ea2392                 ; if below or equal, bail out
77EA1DC1  mov   ecx, ebx                   ; ecx = HashFactor
77EA1DC3  movzx eax, ax
77EA1DC6  imul  ecx, edi                   ; ecx *= edi
77EA1DC9  add   edx, 2                     ; edx += 2
77EA1DCC  add   ecx, eax                   ; ecx += eax
77EA1DCE  mov   edi, ecx                   ; edi = ecx
77EA1DD0  sub   esi, 1                     ; len -= 1
77EA1DD3  jne   0x77ea1db1                 ; if not zero repeat from 77EA1DB1
```

Or more simply in C code:

```c
const char *p = dllname;
int HashedLength = 0x23;
int HashFactor = 0x1F;
int Hash = 0;
for (int i = 0; i < HashedLength; i++, p++)
    Hash = (Hash * HashFactor) + *p;
```
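The same computation can be sketched in Python, with the 32-bit wrap-around made explicit (only the first HashedLength characters count, i.e. the lowercase name without the trailing version token):

```python
def apiset_hash(name, hashed_length, hash_factor):
    # Hash of the first hashed_length characters of the (lowercase) DLL name,
    # truncated to 32 bits like the native code above.
    h = 0
    for c in name[:hashed_length]:
        h = (h * hash_factor + ord(c)) & 0xFFFFFFFF
    return h
```

With HashedLength = 0x23 and HashFactor = 0x1F this is the value the loader computes before searching the API_SET_HASH_ENTRY array.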

As a practical example, let’s take the DLL name “api-ms-win-core-processthreads-l1-1-2.dll”. Its hash would be 0x445B4DF3. If we find its matching API_SET_HASH_ENTRY entry, we’ll have the Index to the associated API_SET_NAMESPACE_ENTRY structure.

```
Offset    0  1  2  3  Ascii
00014DA0  F3 4D 5B 44  .M[D
00014DB0  5B 00 00 00  [...
```

So, 0x5B (or 91) is the index. Going back to the mappings output, we can see that it matches.

```
91) api-ms-win-core-processthreads-l1-1-3 -> kernel32.dll kernelbase.dll
```

By inspecting the same output, we can also notice that all C runtime DLLs are resolved to ucrtbase.dll.

```
167) api-ms-win-crt-conio-l1-1-0 -> ucrtbase.dll
168) api-ms-win-crt-convert-l1-1-0 -> ucrtbase.dll
169) api-ms-win-crt-environment-l1-1-0 -> ucrtbase.dll
170) api-ms-win-crt-filesystem-l1-1-0 -> ucrtbase.dll
171) api-ms-win-crt-heap-l1-1-0 -> ucrtbase.dll
172) api-ms-win-crt-locale-l1-1-0 -> ucrtbase.dll
173) api-ms-win-crt-math-l1-1-0 -> ucrtbase.dll
174) api-ms-win-crt-multibyte-l1-1-0 -> ucrtbase.dll
175) api-ms-win-crt-private-l1-1-0 -> ucrtbase.dll
176) api-ms-win-crt-process-l1-1-0 -> ucrtbase.dll
177) api-ms-win-crt-runtime-l1-1-0 -> ucrtbase.dll
178) api-ms-win-crt-stdio-l1-1-0 -> ucrtbase.dll
179) api-ms-win-crt-string-l1-1-0 -> ucrtbase.dll
180) api-ms-win-crt-time-l1-1-0 -> ucrtbase.dll
181) api-ms-win-crt-utility-l1-1-0 -> ucrtbase.dll
```

I was already resigned to having to figure out how to support the C runtime on XP, when I noticed that Microsoft actually supports the deployment of the runtime on it. The following excerpt from MSDN says as much:

If you currently use the VCRedist (our redistributable package files), then things will just work for you as they did before. The Visual Studio 2015 VCRedist package includes the above mentioned Windows Update packages, so simply installing the VCRedist will install both the Visual C++ libraries and the Universal CRT. This is our recommended deployment mechanism. On Windows XP, for which there is no Universal CRT Windows Update MSU, the VCRedist will deploy the Universal CRT itself.

This means that on Windows editions after XP the support is provided via Windows Update, but on XP we have to deploy the files ourselves. We can find the files to deploy inside C:\Program Files (x86)\Windows Kits\10\Redist\ucrt\DLLs. This path contains three sub-directories: x86, x64 and arm. We’re obviously interested in the x86 one. It contains many files (42): apparently the most common “api-ms-win-*” DLLs and ucrtbase.dll. We can deploy those files onto XP to make our application work. We are still missing vcruntime140.dll, but we can take that DLL from the Visual C++ installation. In fact, that DLL is intended to be deployed, while the Universal CRT (ucrtbase.dll) is intended to be part of the Windows system.

This satisfies our dependencies in terms of DLLs. However, Windows introduced many new APIs over the years which aren’t present on XP. So I wrote a script to test the compatibility of an application by checking the imported APIs against the API exported by the DLLs on XP. The command line for it is “cerpro.exe -r xpcompat.py application_path”. It will check all the PE files in the specified directory.

```python
import os, sys

from Pro.Core import *
from Pro.PE import *

xp_system32 = "C:\\Users\\Admin\\Desktop\\system32"

apisetschema = {"OMITTED FOR BREVITY"}

cached_apis = {}
missing_result = {}

def getAPIs(dllpath):
    apis = {}
    c = createContainerFromFile(dllpath)
    dll = PEObject()
    if not dll.Load(c):
        print("error: couldn't load dll")
        return apis
    ordbase = dll.ExportDirectory().Num("Base")
    functions = dll.ExportDirectoryFunctions()
    names = dll.ExportDirectoryNames()
    nameords = dll.ExportDirectoryNameOrdinals()
    n = functions.Count()
    it = functions.iterator()
    for x in range(n):
        func = it.next()
        ep = func.Num(0)
        if ep == 0:
            continue
        apiord = str(ordbase + x)
        n2 = nameords.Count()
        it2 = nameords.iterator()
        name_found = False
        for y in range(n2):
            no = it2.next()
            if no.Num(0) == x:
                name = names.At(y)
                offs = dll.RvaToOffset(name.Num(0))
                name, ret = dll.ReadUInt8String(offs, 500)
                apiname = name.decode("ascii")
                apis[apiname] = apiord
                apis[apiord] = apiname
                name_found = True
                break
        if not name_found:
            apis[apiord] = apiord
    return apis

def checkMissingAPIs(pe, ndescr, dllname, xpdll_apis):
    ordfl = pe.ImportOrdinalFlag()
    ofts = pe.ImportThunks(ndescr)
    it = ofts.iterator()
    while it.hasNext():
        ft = it.next().Num(0)
        if (ft & ordfl) != 0:
            name = str(ft ^ ordfl)
        else:
            offs = pe.RvaToOffset(ft)
            name, ret = pe.ReadUInt8String(offs + 2, 400)
            if not ret:
                continue
            name = name.decode("ascii")
        if name not in xpdll_apis:
            print("  ", "missing:", name)
            temp = missing_result.get(dllname, set())
            temp.add(name)
            missing_result[dllname] = temp

def verifyXPCompatibility(fname):
    print("file:", fname)
    c = createContainerFromFile(fname)
    pe = PEObject()
    if not pe.Load(c):
        return
    it = pe.ImportDescriptors().iterator()
    ndescr = -1
    while it.hasNext():
        descr = it.next()
        ndescr += 1
        offs = pe.RvaToOffset(descr.Num("Name"))
        name, ret = pe.ReadUInt8String(offs, 400)
        if not ret:
            continue
        name = name.decode("ascii").lower()
        if not name.endswith(".dll"):
            continue
        fwdlls = apisetschema.get(name[:-4], [])
        if len(fwdlls) == 0:
            print("  ", name)
        else:
            fwdll = fwdlls[0]
            print("  ", name, "->", fwdll)
            name = fwdll
        if name == "ucrtbase.dll":
            continue
        xpdll_path = os.path.join(xp_system32, name)
        if not os.path.isfile(xpdll_path):
            continue
        if name not in cached_apis:
            cached_apis[name] = getAPIs(xpdll_path)
        checkMissingAPIs(pe, ndescr, name, cached_apis[name])
    print()

def main():
    if os.path.isfile(sys.argv[1]):
        verifyXPCompatibility(sys.argv[1])
    else:
        files = [os.path.join(dp, f) for dp, dn, fn in os.walk(sys.argv[1]) for f in fn]
        for fname in files:
            with open(fname, "rb") as f:
                if f.read(2) == b"MZ":
                    verifyXPCompatibility(fname)
    # summary
    n = 0
    print("\nsummary:")
    for rdll, rapis in missing_result.items():
        print("  ", rdll)
        for rapi in rapis:
            print("  ", "missing:", rapi)
            n += 1
    print("total of missing APIs:", str(n))

main()
```

I had to omit the contents of the apisetschema global variable for the sake of brevity. You can download the full script from here. The system32 directory referenced in the code is the one of Windows XP, which I copied to my desktop.

And here are the relevant excerpts from the output:

```
file: python-3.7.0-embed-win32\python37.dll
  version.dll
  shlwapi.dll
  ws2_32.dll
  kernel32.dll
    missing: GetFinalPathNameByHandleW
    missing: InitializeProcThreadAttributeList
    missing: UpdateProcThreadAttribute
    missing: DeleteProcThreadAttributeList
    missing: GetTickCount64
  advapi32.dll
  vcruntime140.dll
  api-ms-win-crt-runtime-l1-1-0.dll -> ucrtbase.dll
  api-ms-win-crt-math-l1-1-0.dll -> ucrtbase.dll
  api-ms-win-crt-locale-l1-1-0.dll -> ucrtbase.dll
  api-ms-win-crt-string-l1-1-0.dll -> ucrtbase.dll
  api-ms-win-crt-stdio-l1-1-0.dll -> ucrtbase.dll
  api-ms-win-crt-convert-l1-1-0.dll -> ucrtbase.dll
  api-ms-win-crt-time-l1-1-0.dll -> ucrtbase.dll
  api-ms-win-crt-environment-l1-1-0.dll -> ucrtbase.dll
  api-ms-win-crt-process-l1-1-0.dll -> ucrtbase.dll
  api-ms-win-crt-heap-l1-1-0.dll -> ucrtbase.dll
  api-ms-win-crt-conio-l1-1-0.dll -> ucrtbase.dll
  api-ms-win-crt-filesystem-l1-1-0.dll -> ucrtbase.dll

[...]

file: python-3.7.0-embed-win32\_socket.pyd
  ws2_32.dll
    missing: inet_ntop
    missing: inet_pton
  kernel32.dll
  python37.dll
  vcruntime140.dll
  api-ms-win-crt-runtime-l1-1-0.dll -> ucrtbase.dll

[...]

summary:
  kernel32.dll
    missing: InitializeProcThreadAttributeList
    missing: GetTickCount64
    missing: GetFinalPathNameByHandleW
    missing: UpdateProcThreadAttribute
    missing: DeleteProcThreadAttributeList
  ws2_32.dll
    missing: inet_pton
    missing: inet_ntop
total of missing APIs: 7
```

We’re missing 5 APIs from kernel32.dll and 2 from ws2_32.dll, but the Winsock APIs are imported just by _socket.pyd, a module which is loaded only when a network operation is performed by Python. So, in theory, we can focus our efforts on the missing kernel32 APIs for now.

My plan was to create a fake kernel32.dll, called xernel32.dll, containing forwarders for most APIs and real implementations only for the missing ones. Here’s a script to create C++ files containing forwarders for all APIs of common DLLs on Windows 10:
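The gist of such a generator can be sketched as follows (a hypothetical helper of my own, not the original script; `/export:Name=dll.Name` is the MSVC linker syntax for a forwarder):

```python
def make_forwarders(target_dll, exports, xp_exports):
    # exports: API names exported by the Windows 10 DLL;
    # xp_exports: the subset also exported on XP (used only for the comment).
    lines = []
    for name in sorted(exports):
        tag = " // XP" if name in xp_exports else ""
        lines.append('#pragma comment(linker, "/export:%s=%s.%s")%s'
                     % (name, target_dll, name, tag))
    return "\n".join(lines)
```

For xernel32 we would generate forwarders to kernel32 for everything, and then replace the few entries we implement ourselves with real exports.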

The comment on the right (“// XP”) indicates whether the forwarded API is present on XP or not. We can provide real implementations exclusively for the APIs we want. The Windows loader doesn’t care whether we forward functions which don’t exist as long as they aren’t imported.

The APIs we need to support are the following:

- GetTickCount64: I just called GetTickCount, not really important
- GetFinalPathNameByHandleW: took the implementation from Wine, but had to adapt it slightly
- InitializeProcThreadAttributeList: took the implementation from Wine
- UpdateProcThreadAttribute: same
- DeleteProcThreadAttributeList: same

I have to be grateful to the Wine project here, as it provided useful implementations, which saved me the effort.

I called the attempt at a support runtime for older Windows versions “XP Time Machine Runtime” and you can find the repository here. I compiled it with Visual Studio 2013 and cmake.

Now that we have our xernel32.dll, the only thing left to do is to rename the imported DLL inside python37.dll.
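Since “xernel32.dll” is exactly as long as “kernel32.dll”, the rename boils down to a same-length byte substitution in the import directory (a sketch; a hex editor works just as well):

```python
def rename_import(data: bytearray, old=b"KERNEL32.dll", new=b"xernel32.dll") -> bool:
    # Same length is required so that no offsets inside the PE shift.
    assert len(old) == len(new)
    for candidate in (old, old.lower(), old.upper()):
        off = data.find(candidate)
        if off != -1:
            data[off:off + len(new)] = new
            return True
    return False
```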

Let’s try to start python.exe.

Awesome.

Of course, we’re still not completely done, as we didn’t implement the missing Winsock APIs, but perhaps this and some more could be the content of a second part to this article.

This post comes after a very long hiatus on my side in relation to this personal blog. During the past years I have been very busy with work and other activities, but in the last months I took a break and started to re-think my life.

One of the consequences of this process has been the revamping of NTCore and the decision to provide it with new content in the shape of articles and programs. In fact, I wanted to start with a technical article, but then some considerations crept into my mind and I wanted to share them.

One of the reasons I stopped writing about interesting things and dedicating spare time to my IT hobby was that too much of my time was being spent on work-related IT activities not connected to the development of Cerbero Profiler. Anyone who has ever worked for a company with incompetent managers can understand this perfectly. There are companies, large or small, which kill the passion for whatever you enjoyed doing before working for them.

One classic example is a company which had luck with its first product, because it was the right product at the right time, and which then tries to replicate that first success with an endless number of new projects, all doomed to fail. The reason they do it is that they don’t want the company to rely on only one product. The reason they fail is that they were lucky, not clever, with their first product.

Unfortunately, the boost of arrogance caused by the first hit is enough to eclipse all the following failures, which may or not, depending on the success of the first product, bring the company to collapse.

The technical workforce in such a company is divided into two groups. The first group works on the first product, aka the cash cow. This group endures enormous pressure, because the entire fate of the company depends on them. Not only that, but the pressure increases whenever money is wasted on the other useless side-projects. The frustration of this group stems from the fact that they are the only ones being put under pressure and that their work has to finance what they perceive as the non-work of the others.

The second group works on the side-projects which are doomed to fail. The clever technical people in this group already know that these projects will fail, but that doesn’t change anything in the decisions taken by the company. The frustration of this group stems from continuously doing useless things which nobody cares about, and from not being appreciated like the people in the first group.

In such an environment, it doesn’t matter which group you belong to, whether you understand the big picture or just consider it your day job. You’re screwed regardless. The difference is that the people in the first group tend to last longer, but the toxic environment of the company will consume them as well in the long run. The people in the second group are the ones consumed faster, and there’s a reason for that.

I heard that some large companies take into account the psychological effects on a software developer who worked on a major project, which then got canceled. These companies make sure that the employee is then assigned to the development team of an already established product. This is to avoid the re-occurrence of the same situation for the developer and the psychological strain it would generate for him.

If you currently work for a company of the earlier category, I can give you only one piece of advice: resign and do something else. Cultivate crops, hunt, forge steel or build roads. Anything is better than enduring the bullshit of such a place. You can do it for a time if you need to, but you have to know when to stop.

For years I wasn’t able to live off the profits of my commercial product and needed a day job. Then, in the last years, the situation changed, but I still didn’t stop my other activity for a number of reasons. In the beginning profits were still uncertain, and I also figured that more money was even better.

The ironic thing is that even though you may earn more money, you are also more inclined to spend it easily. This is because of the work-induced mental fatigue which forces your brain to look for continuous gratification to alleviate the pain. So you end up in a fancy apartment, with a big TV, a nice car, etc. It requires some effort to break the routine and part from that situation. Effort which isn’t demanded by the difficulty of giving up a materialistic life-style, but by one’s mental fatigue, which makes it hard to start any new endeavor.

That isn’t to say that I dislike money. In fact, one of the reasons I changed my life is that the money wasn’t nearly good enough for the amount of stress I had to face. I am neither a materialistic person nor a hippie. I can live with little money or with tons of it. It doesn’t change who I am.

It’s been only 10 months since I changed things and started to re-organize my life. The initial months were spent mostly on personal matters, logistics and recovering my physical health. Even though I always kept in shape and did a lot of sport, the stress still had effects on my overall well-being.

I spent the following months on relaxing my mind, making projects for the future and even starting a new hobby, knife making.

Of course, I still worked on my commercial product from time to time, but even that required a thinking pause as the new 3.0 version approaches and it’s a good point in time for some interesting and major improvements. I also made new important business deals unrelated to my product, which wouldn’t have happened if I hadn’t changed things.

That brings us to now and to my wish to rekindle my passion for IT and to the actual topic of the post.

It’s impossible for someone who grew up playing with SoftICE, like myself, not to notice the differences between approaching the field of IT back then and doing it now. In the past, we spent our time on IRC, which was a lot more fun than Twitter. We had fewer technologies to focus on. The result was that we were more focused and less distracted.

Not only that. We were small communities in which you could gain appreciation for some days of work writing a small utility or writing an article. Today nobody gives a fuck. Your article or code is just a drop in the ocean or a tweet in the movie “The Birds”.

Nowadays the IT field exploded with many new fields and disciplines, many of which 20 years ago were relegated to academic research, were insignificantly small or weren’t there at all. Distributed computing, machine learning, mobile development, virtualization etc.

At the same time, the amount of people and money in the IT industry also caused the explosion of bullshit. From IT security up until the retarded bullshit of agile development.

Although this may just seem another “things were better before” comment, it’s not really the point of it. There’s a natural process of commercialization from something which is niche to something which becomes common and consumed by the masses, which makes the field for those belonging to the initial niche less appealing. This is normal.

What is interesting is that we lose interest in things today because we are overclocked. By this technical reference I mean that we are overstimulated. We developed a numbness toward technology because we were exposed to too many (mostly useless) innovations, in an amount our brains couldn’t absorb, so they gave up and lost interest.

While, of course, no one can centrally control the amount of innovations which globally come out every day, individual companies can limit the amount of innovations within their own products for our brains to be able to appreciate them.

There’s a reason why nobody cares today when the new Windows is released. Many stopped caring after Windows Vista and most after Windows 7. Remember when the release of a new Windows was a big event? Remember how respected the work of Matt Pietrek and Sven B. Schreiber was? It’s not just because they were pioneers. The reason is that we cared beyond having a resource to help us implement our daily piece of code.

We had the illusion that technology was a progression towards improvement. And now we are disillusioned.

In my old rants against Microsoft, wherein I predicted the failure of products like Windows Phone and Silverlight, it is possible to notice the increasing disillusionment. Let me quote an old post from 2011:

Moreover, Windows could be improved to an endless extent without re-inventing the wheel every 2 years. If the decisions were up to me I would work hard on micro-improvements. Introduce new sets of native APIs along Win32. And I’d do it gradually, with care and try to give them a strong coherency. I would try to introduce benefits which could be enjoyed even by applications written 15 years ago. The beauty should lie in the elegance in finding ingenious solutions for extending what is already there, not by doing tabula rasa every time. I would make developers feel at home and that their time and code is highly valued, instead of making them feel like their creations are always obsolete compared to my brand new technology which, by the way, nobody uses.

To be clear, it isn’t just Microsoft. All the big players make the same mistake. During Jobs’ era at Apple we had a controlled amount of improvements which we could appreciate. When Jobs died, Apple became the same as any other company, and today nobody cares about Apple products either.

The gist of my theory is what follows. The majority of people use Windows or the iPhone to do a number of things. While a minority of people may think it’s cool to have an even slimmer phone without a headphone jack, or to charge it without a wire, these are actually regressions (having to buy new adapters or headphones from Apple, more easily breaking your phone because the back is made out of glass) and they annoy the majority, while also numbing their capacity to absorb improvements.

If you add to your product 50 new things and only 5 of those are actual improvements, even those 5 improvements will become an indistinguishable blur among the other 45 and won’t even be perceived.

And just to hammer my point home, let’s take a Victorinox Swiss Army Knife (yes, I grew up watching MacGyver). It has more than a hundred years of history and it is perfect as it is. Of course, a minority of people may think that adding a pizza cutter to it is essential, but Victorinox doesn’t work for a minority. Yes, every now and then a new model of knife comes out intended for a particular group of people like sailing enthusiasts or IT workers, but the classic models have more or less remained unchanged throughout the decades. What happened is that they went through countless micro-improvements which brought them to the state-of-the-art tools they are today.

An OS, just like any important piece of technology, should give the user the same satisfaction a Victorinox SAK gives to its holder.

These are some of the considerations which crossed my mind while trying to make my entrance into the IT world again. They will reflect on my work, and over the next months I will put my money where my mouth is.

After over a decade, I finally took two afternoons to revamp this personal web-page and to merge the content of the old NTCore page with the content of its blog (rcecafe.net). All the URLs of the old web-page and blog have been preserved in the process.