# Why is `INFINITY - INFINITY` calculated as `-NAN`?

I have a C function that checks that an `x - x` expression, where `x` is a `double`, is not reduced to 0.

Why, when `x` is `INFINITY`, does `x - x` give the wrong result `-NAN` (sign bit set) instead of `NAN` (sign bit clear)?

```c
/******************************************************************************
 * Test that the compiler does not optimise x - x to zero, and that INF - INF
 * is correctly calculated. The expression is prone to being reduced to zero
 * by a sloppy compiler.
 * Returns 0x1 on success; otherwise an even value less than 0xf is returned.
 ******************************************************************************/
static unsigned int
infInfSubtraction(void)
{
    unsigned int res = 0x0u;

    const unsigned long long pattInf  = 0x7ff0000000000000uLL;
    const unsigned long long pattNan  = 0x7ff8000000000000uLL;
    const unsigned long long pattZero = 0x0000000000000000uLL;

    const double x = *(double*)&pattInf;

    const double XsubX = x - x;

    if (*(unsigned long long*)&XsubX == pattNan)
    {
        res = 0x1u;   // Correct subtraction: INFINITY - INFINITY = NAN
    }
    else if (*(unsigned long long*)&XsubX == pattZero)
    {
        res = 0x2u;   // Incorrect subtraction: INFINITY - INFINITY = 0
    }
    else
    {
        res = 0xeu;   // Unexpected bit pattern (e.g. a NaN with the sign bit set)
    }

    return res;
}   // infInfSubtraction()
```