I'm trying to come up with the most efficient algorithm to convert "ffffff" into 255,255,255, so I can call it with the Color() function. I know how to do it the brute-force way (manually convert each digit, then do the multiplication), but I'm wondering if there's a better way to do it. Any ideas?
Dunno if this helps, but someone made a library for this back in WC3: *click*; you could have a look at the algorithms. However, it might be just the same as what you do now; I did not look into it :)
Thanks for the quick response :) will check it out.
edit: lolwut >_>
Vex is at a level beyond me. I've no idea what's going on there, but I made my own version. Just thought I'd share it here.
// Hex values
const string c_Hex = "0123456789abcdef"; // Used for color conversion

// Hex color to color component
int hc2c(string s) {
    int i = StringFind(c_Hex, StringSub(s, 1, 1), false);
    int j = StringFind(c_Hex, StringSub(s, 2, 2), false);
    return clampI((i - 1) * 16 + (j - 1), 0, 255);
}

color GetHexColor(string s) {
    int r = hc2c(StringSub(s, 1, 2));
    int g = hc2c(StringSub(s, 3, 4));
    int b = hc2c(StringSub(s, 5, 6));
    // Error check
    if (StringLength(s) != 6) {
        if (s != "") {
            // tDebug(xc("FF8888", "Error: Invalid hex color @:\n" + s));
        }
        return Color(255, 255, 255);
    }
    return Color(r, g, b);
}
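For anyone who wants to play with the idea outside the editor, here is a rough Python sketch of the same approach (Python, not Galaxy; the names are mine). Each hex digit is looked up in a digit string, and each pair of digits becomes one 0-255 component. Note that Python's `str.find()` is 0-based while Galaxy's `StringFind()` is 1-based, which is why the Galaxy version subtracts 1 from each index.

```python
# Sketch of the hex-pair parsing idea in Python (not Galaxy).
C_HEX = "0123456789abcdef"

def hc2c(pair):
    """Convert a two-character hex pair like 'ff' into 0-255."""
    i = C_HEX.find(pair[0].lower())  # 0-based index; Galaxy's StringFind is 1-based
    j = C_HEX.find(pair[1].lower())
    return max(0, min(i * 16 + j, 255))  # clamp, like clampI in the Galaxy version

def get_hex_color(s):
    """Convert 'rrggbb' into an (r, g, b) tuple, defaulting to white."""
    if len(s) != 6:
        return (255, 255, 255)  # same white fallback as the Galaxy version
    return (hc2c(s[0:2]), hc2c(s[2:4]), hc2c(s[4:6]))
```

For example, `get_hex_color("ffffff")` gives `(255, 255, 255)` and `get_hex_color("ff8000")` gives `(255, 128, 0)`.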
Don't let him confuse you; he is just using very specific features of the language extension he created. The actual calculations seem to be pretty simple.
b + g*0x100 + r*0x10000 + a*0x1000000
This packs the a, r, g and b values into a single integer, laid out as 0xAARRGGBB. Very straightforward.
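You can check that formula with a tiny snippet (Python here purely for the arithmetic; the constants are the same):

```python
# Pack a, r, g, b bytes (each 0-255) into one 0xAARRGGBB integer,
# using exactly the constants from the formula above.
def pack_argb(a, r, g, b):
    return b + g * 0x100 + r * 0x10000 + a * 0x1000000

print(hex(pack_argb(0xAA, 0x11, 0x22, 0x33)))  # 0xaa112233
```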
And this returns one specific color component, in this case red: he multiplies the stored color integer (this) by 0x100 and then divides it by 0x1000000. For 0xAARRGGBB, that leaves just the RR.
He uses the int overflow to his advantage, it seems; 32-bit integers have exactly 0x100000000 (2^32) possible values.
He multiplies it by 0x100, which shifts everything left by two hex digits and triggers the overflow: the top two digits fall off, so 0xAARRGGBB * 0x100 becomes 0xRRGGBB00. Dividing that result by 0x1000000 then gives you the correct digits.
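Here is a Python sketch of that extract-RR trick as described. Python ints never overflow, so the 32-bit wrap-around is simulated with a mask; in WC3 it happens implicitly in the multiply (signed-integer behavior in JASS/Galaxy may add wrinkles this sketch ignores).

```python
# Extract the RR byte from 0xAARRGGBB via the multiply/divide trick.
def get_red(argb):
    shifted = (argb * 0x100) & 0xFFFFFFFF  # 0xAARRGGBB -> 0xRRGGBB00 (AA overflows away)
    return shifted // 0x1000000            # keep only the top byte: 0xRR
```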
I just tested the overflow, and apparently there is an overflow check in Galaxy at compile time:
int i = 0x80000000;
causes a numeric overflow error, so to use Vexorian's function in exactly the same way, you would need to use 0x7FFFFFFF and add 1...
int i = 0x7FFFFFFF;
i += 1;
works just fine.
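To see why 0x7FFFFFFF + 1 gives you 0x80000000's bit pattern, here is a small Python model of a signed 32-bit wrap (my own helper, not a Galaxy function):

```python
# Model of signed 32-bit wrap-around: keep the low 32 bits, then
# reinterpret values >= 0x80000000 as negative.
def wrap32(n):
    n &= 0xFFFFFFFF
    return n - 0x100000000 if n >= 0x80000000 else n

print(wrap32(0x7FFFFFFF + 1))  # -2147483648, i.e. the bit pattern 0x80000000
```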
This also means one cannot easily use integers the way Vex did in WC3, since it's not possible to pass an integer > 0x7FFFFFFF to a function or assign it to a constant (well, you could use something like 0x7FFFFFFF*2+1 :P, or just scrap alpha and only use 0xRRGGBB).
Well, you use strings and not integers anyway.
Yeah, I've heard about that before. It's one of the caveats of Galaxy. Apparently, when you do a logical shift right, it pads with 1's as well, so by default it makes the first bit a 1 and spits out an error? (Not sure on this one.)
Somehow I doubt that functions like tDebug(), xc() and clampI() are natives ;) You might want to share what they do.
@Kueken531:
Oh, they're just some custom functions of mine :P