Wednesday, 26 November 2008
Fedora 10 & OCaml
You can join in the general Fedora 10 fun here, but a quick note that Fedora 10 comes with stable OCaml 3.10.2 and 68 OCaml packages, making OCaml the fastest and best-supported functional language in Fedora.
Monday, 24 November 2008
Common mistakes cross-compiling MinGW packages
Using the headers from /usr/include
The headers in /usr/include are for the native libraries installed on the system, and it's highly unlikely they will work for cross-compilation. By "won't work" I mean that types and structure fields could be different, resulting in a segfault.
The Fedora MinGW project takes two steps to avoid using native libraries by accident. Firstly, GCC is configured so it looks in /usr/i686-pc-mingw32/sys-root/mingw/include and never looks in /usr/include (as long as you don't tell it to). Secondly, we supply a replacement %{_mingw32_configure} RPM macro which sets PKG_CONFIG_PATH, so any pkg-config done during the build will pick up the cross-compiled libraries' configuration instead of any native libraries' configuration.
$ PKG_CONFIG_PATH=/usr/i686-pc-mingw32/sys-root/mingw/lib/pkgconfig \
pkg-config --cflags glib-2.0
-mms-bitfields -I/usr/i686-pc-mingw32/sys-root/mingw/include/glib-2.0
-I/usr/i686-pc-mingw32/sys-root/mingw/lib/glib-2.0/include
One thing that can still go wrong is that the cross-compiled library isn't installed, because for example you missed a BuildRequires line, and the build then picks up the native library instead. That mistake usually becomes evident when the program tries to link, because linking a cross-compiled Windows binary against a native Fedora library won't work.
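To make the header point above concrete, here is a deliberately contrived test, not taken from any real package: the same source reports a different structure size depending on whose headers it was compiled against, because glibc and the MinGW runtime define struct stat quite differently. Compile it once with the native gcc and once with the MinGW cross GCC and compare the output; mixing objects built against the two definitions is how you end up with the segfaults described above.

/* Contrived illustration only: struct stat is one familiar structure
   whose layout differs between the native (glibc) headers and the MinGW
   headers under /usr/i686-pc-mingw32/sys-root/mingw/include. */
#include <stdio.h>
#include <sys/stat.h>

int
main (void)
{
  printf ("sizeof (struct stat) = %lu\n",
          (unsigned long) sizeof (struct stat));
  return 0;
}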
Not setting --prefix
You likely don't want to install Windows binaries and libraries under /usr or /usr/local. For a start it's better to keep Windows things in one place, and the packaging guidelines have specified that place to be /usr/i686-pc-mingw32/sys-root/mingw. But mainly it's not a good idea to mix up native and cross-compiled libraries, which will cause all sorts of problems as in the point above.
If you use %{_mingw32_configure} in RPM specfiles, or the mingw32-configure command, then paths will be set correctly for you.
Not using a portability library
If you're writing the program yourself, or if you're doing the often difficult work of porting an existing application, use a portability library to help you. Which you choose is up to you and depends on many factors, but we would recommend that you look at these ones:
Writing your own build system
While it's fashionable to dislike autoconf and m4 macros, it is still by far the easiest way to both build your program on multiple systems and to cross-compile. So use autotools or cmake, and definitely don't write your own build system. Discourage other projects from writing their own build systems too.
This really comes down to bitter experience. Every project we have had to port that has used its own build system has been far more of a headache than those that just used autoconf or cmake.
Running programs during the build process
When cross-compiling, it's always a mistake to run programs during essential build steps. The problem is that you can't be sure that binaries can be made to work in the build environment. For Windows binaries there is some chance of running them under Wine, but Wine itself is incompatible with autobuild environments like mock and Koji. Furthermore, Wine only works on x86 platforms, and it's not possible to use it at all when cross-compiling from other architectures like PPC.
Running programs during make test is normal and useful though.
Wednesday, 19 November 2008
Egg & "Verified by Visa"
Message sent to Egg today about Verified by Visa:
Dear Sir/Madam,
I would like to permanently opt out of "Verified By Visa" when making purchases online. It just moves the liability on to me and the technical implementation of it is frankly crap. If not, I'll cancel my card (I expect you'll be happy about that) since it's no longer useful for purchases.
If however you are going to introduce some scheme which is really secure, such as a hardware token or one-time credit card numbers or authorization by SMS message, then let me know.
Update (2008-11-20) — a dull form reply from Egg:
The Secure online code service is supported by Verified by Visa and MasterCard Secure Code. It protects your card with a password, giving you added security when you shop online.
When you make purchases online with participating retailers, you'll be presented with a receipt at the end of the checkout process. The receipt includes details of your purchase, showing retailer name, purchase amount and date. You sign the receipt using your personal password and click 'Submit' to proceed with the purchase. Without your password the purchase can't be completed.
This is a system that's been put in place by Visa and MasterCard. It's to provide a more secure service, when making purchases online.
Unfortunately, this isn't something we can remove from your Egg Card.
Thanks for your message.
Emily Stirling
Internet Customer Services
Er yes, thanks for nothing Emily. You don't mention the idiotic implementation or the fact that they are passing liability over to their customers. I'm cancelling my credit card and looking for a secure alternative.
Update (2008-11-24) — I can't believe it, the fuckers cancelled my credit card.
LWN.net has an interview with us about the MinGW Windows cross-compiler
Here is the article link if you are an LWN subscriber:
http://lwn.net/Articles/307732/
If you're not an LWN subscriber, you can use this free link to get to the article:
http://lwn.net/SubscriberLink/307732/0efc7b75c5696ae5/
Please consider subscribing to LWN!
Sunday, 9 November 2008
OCaml Users Meeting, Feb 2009, Grenoble
Sylvain is already organizing the next OCaml Users Meeting, to be held on 4 February 2009 in Grenoble, France.
The last meeting (rubbish photo I took below) was a great success, and since so much has happened in the community this year, I expect this one will be even bigger and better.
Update: Sylvain's announcement and the official photo
Sunday, 2 November 2008
malloc failures
I can't put a comment on Debarshi's post, so I'll answer here. Debarshi complains about this comment by the "inimitable" Jeff Johnson:
You have to look at the usage case, malloc returning NULL is a "can't happen" condition where an exit call is arguably justified.
Returning an error from library to application when malloc returns NULL assumes:
1) error return paths exist [...]
2) applications are prepared to do something meaningful with the error
Another problem is that only about 1 in 10 memory allocations in a typical C program are mallocs. The rest are stack-allocated variables, and those aren't usually checked at all. If any of your 9 out of 10 stack allocations fail, your whole program fails hard.
This is the correct way to deal with those 1 in 10 memory allocations that you can check — provide a custom abort function that the main program can override in the very rare case that they can do anything useful other than exit:
#include <stdlib.h>

/* The abort function used by the library.  By default it kills the whole
   process, but the main program may install its own handler. */
void (*custom_abort) (void) = abort;

void
lib_set_custom_abort (void (*new_abort) (void))
{
  custom_abort = new_abort;
}

/* Every allocation in the library goes through this wrapper, so malloc
   failure is handled in exactly one place. */
void *
lib_malloc (int n)
{
  void *data = malloc (n);
  if (data == NULL) custom_abort ();
  return data;
}

Note that the main program can use longjmp (or exceptions in some cases) to "return" back to a safe point in the program, such as a transaction checkpoint. If the main program uses pool allocators — about the only safe and sensible way to deal with C's programming model — then the program has a chance of recovering.
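As a purely illustrative sketch of that longjmp idea, here is one way a main program might install its own handler and unwind to a checkpoint when an allocation inside the library fails. The recover_abort name, the extern declarations and the "transaction" are my own assumptions, not part of the original example:

#include <setjmp.h>
#include <stdio.h>
#include <stdlib.h>

/* Declarations for the library functions shown above. */
extern void lib_set_custom_abort (void (*) (void));
extern void *lib_malloc (int);

static jmp_buf checkpoint;

/* Installed with lib_set_custom_abort: instead of exiting, unwind back
   to the checkpoint established in main.  longjmp never returns. */
static void
recover_abort (void)
{
  longjmp (checkpoint, 1);
}

int
main (void)
{
  lib_set_custom_abort (recover_abort);

  if (setjmp (checkpoint) != 0) {
    /* Any lib_malloc failure below lands here: undo whatever the
       "transaction" touched and carry on, or exit cleanly. */
    fprintf (stderr, "allocation failed, transaction abandoned\n");
    return 1;
  }

  char *buf = lib_malloc (1024);
  /* ... transactional work using buf ... */
  free (buf);
  return 0;
}

The usual caveat applies, which is really the point about pool allocators above: jumping out is only safe if the library's state can be discarded or rolled back wholesale afterwards.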
Really the answer is to use a sensible programming language though. Programming languages invented before C had safer, faster memory allocation, dealt with 10 out of 10 memory allocation errors, and provided a mechanism to recover correctly. Those languages are now 30 years more advanced. In 2008 we're having these silly arguments about how to deal with malloc failures. That's a failure of ourselves as programmers.