String operations

Started by Parker, January 28, 2006, 12:42:33 PM


Parker

Say I have an app in which I want to use both ASCII and UTF-16 strings. So I declare the required functions:
declare import, AnAsciiFunction(string str);
declare import, AUnicodeFunction(string str);

but... when I call them, how will the compiler know which to use?
There are lots of possible solutions: extra astring and wstring types, or declaring the parameters as byte[] or word[]. But I'd like to know how situations like this will be handled when the compiler can't guess which type you're using, especially for string literals. Or will there be something like C's L"" syntax?

Personally I'm still in favor of the types astring and wstring that let you choose what the function takes and are more obvious to the programmer. But I'm sure something will be worked out. I'd just like to know what ;)
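For comparison, C resolves the same ambiguity through the type system: the L prefix gives a literal a different type, so only the matching declaration will accept it. A minimal C sketch (the function names are the ones above; the bodies are placeholders):

#include <stdio.h>
#include <wchar.h>

/* Two entry points, one per encoding; the literal's type tells the
   compiler which one it may legally be passed to. */
void AnAsciiFunction(const char *str)     { printf("%s\n", str); }
void AUnicodeFunction(const wchar_t *str) { printf("%ls\n", str); }

int main(void)
{
    AnAsciiFunction("Hello");    /* narrow literal: array of char */
    AUnicodeFunction(L"Hello");  /* L"" literal: array of wchar_t */
    return 0;
}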

Ionic Wind Support Team

The STRING type in Aurora is passed by reference; that is to say, the compiler doesn't care what is in the string, whether it is composed of bytes or words. It just passes an address to either function.
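In other words, only the address crosses the call boundary, and only the callee's interpretation gives the buffer meaning. A tiny C illustration of the same idea (not Aurora code):

#include <stdio.h>

/* The same address can be handed to either routine; the pointer itself
   says nothing about whether the buffer holds bytes or 16-bit words. */
static void dump_bytes(const void *p, size_t n)
{
    const unsigned char *b = p;
    for (size_t i = 0; i < n; i++) printf("%02X ", b[i]);
    printf("\n");
}

static void dump_words(const void *p, size_t n)
{
    const unsigned short *w = p;
    for (size_t i = 0; i < n / 2; i++) printf("%04X ", w[i]);
    printf("\n");
}

int main(void)
{
    unsigned short utf16[] = { 'H', 'i', 0 };  /* "Hi" in 16-bit units */
    dump_bytes(utf16, sizeof utf16);  /* 48 00 69 00 00 00 (little-endian) */
    dump_words(utf16, sizeof utf16);  /* 0048 0069 0000 */
    return 0;
}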


Parker

I think I didn't word the question right. If I'm calling the above functions
AnAsciiFunction("Hello");
AUnicodeFunction("Hello");

there's no way the compiler can tell whether it should generate a Unicode string for each call. If you used the word[] type (assuming it was allowed), it would know to convert. If we had to type L"Hello" ourselves, it would know to pass a Unicode string.

Personally I'd like the compiler to do the conversion for me ;) but if it doesn't, I can convert back from the default string type using one of the CCL functions, or the Aurora string library functions when they are made.
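On Windows, the manual conversion Parker mentions comes down to a call like MultiByteToWideChar. A hedged C sketch (the helper name ansi_to_wide is made up for illustration; the Win32 calls are real):

#include <windows.h>

/* Convert an ANSI string to UTF-16; this is the kind of work a
   string-library conversion routine would have to do internally. */
static wchar_t *ansi_to_wide(const char *src)
{
    int n = MultiByteToWideChar(CP_ACP, 0, src, -1, NULL, 0);  /* length incl. NUL */
    if (n == 0)
        return NULL;
    wchar_t *dst = (wchar_t *)HeapAlloc(GetProcessHeap(), 0, n * sizeof(wchar_t));
    if (dst != NULL)
        MultiByteToWideChar(CP_ACP, 0, src, -1, dst, n);
    return dst;  /* caller releases with HeapFree */
}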

Mike Stefanik

It's something that I'd imagine will ultimately be a compiler switch (whether literals are Unicode or not), which would also define something like "UNICODE" so you could use #ifdef with it. Less attractive would be adopting the tchar.h approach that C/C++ uses, where every literal string is specified using the _T macro; it works, but it makes the code look ugly.
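That tchar.h approach looks like this in C. With _UNICODE defined, TCHAR is wchar_t and _T("...") expands to L"..."; without it, both collapse to plain char (windows.h keys off UNICODE, tchar.h off _UNICODE, and the two are conventionally defined together):

#include <windows.h>
#include <tchar.h>

int main(void)
{
    /* One source tree builds both ways, at the cost of wrapping
       every single literal in the _T macro. */
    const TCHAR *msg = _T("Hello");
    MessageBox(NULL, msg, _T("Demo"), MB_OK);
    return 0;
}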

The fundamental issue with Unicode support is that on Windows 9x, most function calls inherently use ANSI. A Unicode build of an executable has extra overhead at runtime because all of those Unicode calls need to be converted to their ANSI equivalents. It's further complicated by the fact that unless the end user has the Microsoft Layer for Unicode installed, not all of the Unicode functions in the Win32 API are available.

The reverse is true on Windows NT/2K/XP, where all function calls inherently use Unicode, and a program built using ANSI calls has extra overhead at runtime because those calls need to be converted to their Unicode equivalents.
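The shape of that overhead can be sketched in C: the ANSI half of an A/W pair is essentially a thunk that converts its arguments and forwards to the Unicode half. MyFunctionA and MyFunctionW are hypothetical names standing in for any such pair:

#include <windows.h>

/* Hypothetical A/W pair. On NT the W version does the real work... */
void MyFunctionW(const wchar_t *text)
{
    /* ... real implementation, operating on UTF-16 ... */
}

/* ...and the A version pays for a conversion on every call. */
void MyFunctionA(const char *text)
{
    wchar_t buf[256];  /* fixed size to keep the sketch short */
    if (MultiByteToWideChar(CP_ACP, 0, text, -1, buf, 256) > 0)
        MyFunctionW(buf);
}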

In the end, unless the developer is absolutely certain that they only want to support the NT family of platforms, they typically go with the least-common-denominator approach and release ANSI builds. Rarely, I've seen packages include two different builds, with the correct one installed based on the platform, but that has the potential to be a real support headache.

Ionic Wind Support Team

Right. We have the "ALIAS" keyword, which, combined with #ifdef, solves your issue.

#ifdef UNICODE
import TheFunction ALIAS TheFunctionW(string str);
#else
import TheFunction ALIAS TheFunctionA(string str);
#endif

Which works now, and is pretty much how C does it.
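For reference, this is essentially what the Win32 headers do in C for every A/W pair: one neutral name is #defined to the A or W export at compile time.

#ifdef UNICODE
#define MessageBox  MessageBoxW
#else
#define MessageBox  MessageBoxA
#endif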

Paul.

Parker

But this means we have to manually convert strings? It's not too much of a problem. I prefer to use Unicode even if it isn't supported on 9x systems; it's more flexible.

I like "import" better than __declspec(dllimport) ;)