Lock and Atomic Operation Related Intrinsics

The prototypes for these intrinsics are in the ia64intrin.h header file.

Each entry below gives an intrinsic's prototype, followed by a description of its behavior.

unsigned __int64 _InterlockedExchange8(volatile unsigned char *Target, unsigned __int64 value)

Maps to the xchg1 instruction. Atomically writes the least significant byte of its second argument to the address specified by its first argument and returns the original value at that address.
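
For instance, an 8-bit flag can be set and its previous state observed in one atomic step. The sketch below is hypothetical (the flag variable and helper are invented for the example) and relies on the intrinsic returning the original byte:

#include <ia64intrin.h>

volatile unsigned char flag = 0;

/* Hypothetical test-and-set: returns nonzero if the flag was already set. */
int test_and_set_flag(void)
{
    /* Writes 1 to the flag and yields the byte that was there before. */
    return (int)_InterlockedExchange8(&flag, 1);
}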

unsigned __int64 _InterlockedCompareExchange8_rel(volatile unsigned char *Destination, unsigned __int64 Exchange, unsigned __int64 Comparand)

Atomically compares and exchanges the byte at the address specified by its first argument, using release semantics. Maps to the cmpxchg1.rel instruction with appropriate setup.

unsigned __int64 _InterlockedCompareExchange8_acq(volatile unsigned char *Destination, unsigned __int64 Exchange, unsigned __int64 Comparand)

Same as the previous intrinsic, but with acquire semantics; maps to the cmpxchg1.acq instruction.

unsigned __int64 _InterlockedExchange16(volatile unsigned short *Target, unsigned __int64 value)

Maps to the xchg2 instruction. Atomically writes the least significant 16-bit word of its second argument to the address specified by its first argument and returns the original value at that address.

unsigned __int64 _InterlockedCompareExchange16_rel(volatile unsigned short *Destination, unsigned __int64 Exchange, unsigned __int64 Comparand)

Atomically compares and exchanges the 16-bit word at the address specified by its first argument, using release semantics. Maps to the cmpxchg2.rel instruction with appropriate setup.

unsigned __int64 _InterlockedCompareExchange16_acq(volatile unsigned short *Destination, unsigned __int64 Exchange, unsigned __int64 Comparand)

Same as the previous intrinsic, but with acquire semantics; maps to the cmpxchg2.acq instruction.

int _InterlockedIncrement(volatile int *addend)

Atomically increments by one the value specified by its argument. Maps to the fetchadd4 instruction.

int _InterlockedDecrement(volatile int *addend)

Atomically decrements by one the value specified by its argument. Maps to the fetchadd4 instruction.
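
As an illustration, these two intrinsics are a natural fit for reference counting. The sketch below is hypothetical (the object_t type and its helpers are invented for the example) and assumes the return value is the updated count, consistent with the Win32 interlocked convention:

#include <stdlib.h>
#include <ia64intrin.h>

typedef struct {
    volatile int refcount;   /* shared reference count */
    /* ... payload ... */
} object_t;

/* Take an additional reference, atomically. */
void object_addref(object_t *obj)
{
    _InterlockedIncrement(&obj->refcount);
}

/* Drop a reference; free the object when the last one goes away.
   Assumes _InterlockedDecrement returns the decremented value. */
void object_release(object_t *obj)
{
    if (_InterlockedDecrement(&obj->refcount) == 0)
        free(obj);
}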

int _InterlockedExchange(volatile int *Target, long value)

Atomically exchanges the value at the address specified by its first argument with its second argument. Maps to the xchg4 instruction.

int _InterlockedCompareExchange(volatile int *Destination, int Exchange, int Comparand)

Atomically compares and exchanges the value at the address specified by its first argument. Maps to the cmpxchg4 instruction with appropriate setup.
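
A compare-and-exchange of this kind is the building block for lock-free read-modify-write operations. The helper below is a hypothetical sketch that atomically ORs a mask into a shared word by retrying until no other thread has intervened:

#include <ia64intrin.h>

/* Hypothetical helper: atomically OR mask into *word. */
void atomic_or(volatile int *word, int mask)
{
    int old;
    do {
        old = *word;   /* snapshot the current value */
        /* _InterlockedCompareExchange stores (old | mask) only if
           *word still equals old, and returns the value it found;
           retry when another thread changed *word in between. */
    } while (_InterlockedCompareExchange(word, old | mask, old) != old);
}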

int _InterlockedExchangeAdd(volatile int *addend, int increment)

Uses compare-and-exchange to atomically add the increment value to the addend, returning the original value. Maps to a loop with the cmpxchg4 instruction to guarantee atomicity.

int _InterlockedAdd(volatile int *addend, int increment)

Same as the previous intrinsic, but returns the new value rather than the original one.
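
The only difference between the two is the return value, as this small sketch illustrates (the counter variable is hypothetical):

#include <ia64intrin.h>

volatile int counter = 10;

void demo(void)
{
    /* _InterlockedExchangeAdd returns the value before the add. */
    int before = _InterlockedExchangeAdd(&counter, 5);  /* before == 10, counter now 15 */

    /* _InterlockedAdd returns the value after the add. */
    int after = _InterlockedAdd(&counter, 5);           /* after == 20, counter now 20 */
}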

void * _InterlockedCompareExchangePointer(void * volatile *Destination, void *Exchange, void *Comparand)

Maps to the cmpxchg8 instruction. Atomically compares and exchanges the pointer value specified by its first argument (all arguments are pointers).
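
A typical use of a pointer-sized compare-and-exchange is pushing onto a lock-free singly linked list. The node type and list head below are hypothetical:

#include <stddef.h>
#include <ia64intrin.h>

typedef struct node {
    struct node *next;
    /* ... payload ... */
} node_t;

void * volatile list_head = NULL;   /* shared head of the list */

/* Push a node by retrying until the head is unchanged between the
   read and the compare-and-exchange. */
void push(node_t *n)
{
    void *old;
    do {
        old = list_head;           /* snapshot the current head */
        n->next = (node_t *)old;   /* link the new node in front */
    } while (_InterlockedCompareExchangePointer(&list_head, n, old) != old);
}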

unsigned __int64 _InterlockedExchangeU(volatile unsigned int *Target, unsigned __int64 value)

Atomically exchanges the 32-bit quantity at the address specified by the first argument. Maps to the xchg4 instruction.

unsigned __int64 _InterlockedCompareExchange_rel(volatile unsigned int *Destination, unsigned __int64 Exchange, unsigned __int64 Comparand)

Maps to the cmpxchg4.rel instruction with appropriate setup. Atomically compares and exchanges, with release semantics, the 32-bit value at the address specified by the first argument.

unsigned __int64 _InterlockedCompareExchange_acq(volatile unsigned int *Destination, unsigned __int64 Exchange, unsigned __int64 Comparand)

Same as the previous intrinsic, but with acquire semantics; maps to the cmpxchg4.acq instruction.

void _ReleaseSpinLock(volatile int *x)

Releases the spin lock pointed to by its argument.
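
Paired with one of the acquire-form compare-and-exchange intrinsics, this is enough for a simple test-and-set spin lock. A minimal sketch, assuming the convention that 0 means free and 1 means held:

#include <ia64intrin.h>

volatile int lock = 0;   /* 0 = free, 1 = held */

void acquire_spinlock(volatile int *x)
{
    /* Spin until this thread is the one that changes the lock word
       from 0 to 1; acquire semantics keep the critical section from
       being reordered before the lock is taken. */
    while (_InterlockedCompareExchange_acq((volatile unsigned int *)x, 1, 0) != 0)
        ;   /* busy-wait */
}

/* Usage: acquire_spinlock(&lock); ...critical section...;
   _ReleaseSpinLock(&lock); */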

__int64 _InterlockedIncrement64(volatile __int64 *addend)

Atomically increments by one the value specified by its argument. Maps to the fetchadd8 instruction.

__int64 _InterlockedDecrement64(volatile __int64 *addend)

Atomically decrements by one the value specified by its argument. Maps to the fetchadd8 instruction.

__int64 _InterlockedExchange64(volatile __int64 *Target, __int64 value)

Atomically exchanges the 64-bit value at the address specified by its first argument with its second argument. Maps to the xchg8 instruction.

unsigned __int64 _InterlockedExchangeU64(volatile unsigned __int64 *Target, unsigned __int64 value)

Same as _InterlockedExchange64, but for unsigned quantities.

unsigned __int64 _InterlockedCompareExchange64_rel(volatile unsigned __int64 *Destination, unsigned __int64 Exchange, unsigned __int64 Comparand)

Maps to the cmpxchg8.rel instruction with appropriate setup. Atomically compares and exchanges, with release semantics, the 64-bit value at the address specified by the first argument.

unsigned __int64 _InterlockedCompareExchange64_acq(volatile unsigned __int64 *Destination, unsigned __int64 Exchange, unsigned __int64 Comparand)

Maps to the cmpxchg8.acq instruction with appropriate setup. Atomically compares and exchanges, with acquire semantics, the 64-bit value at the address specified by the first argument.

__int64 _InterlockedCompareExchange64(volatile __int64 *Destination, __int64 Exchange, __int64 Comparand)

Same as the previous intrinsic, but for signed quantities.

__int64 _InterlockedExchangeAdd64(volatile __int64 *addend, __int64 increment)

Uses compare-and-exchange to atomically add the increment value to the addend, returning the original value. Maps to a loop with the cmpxchg8 instruction to guarantee atomicity.

__int64 _InterlockedAdd64(volatile __int64 *addend, __int64 increment)

Same as the previous intrinsic, but returns the new value rather than the original one. See the Note below.

Note

_InterlockedSub64 is provided as a macro definition based on _InterlockedAdd64.

#define _InterlockedSub64(target, incr) _InterlockedAdd64((target),(-(incr)))

Uses compare-and-exchange to atomically subtract the incr value from the target. Maps to a loop with the cmpxchg8 instruction to guarantee atomicity.
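
For example, _InterlockedSub64(&total, 25) expands to _InterlockedAdd64(&total, -(25)), atomically subtracting 25 and, like _InterlockedAdd64, returning the new value (the total variable here is hypothetical):

#include <ia64intrin.h>

volatile __int64 total = 100;

void demo_sub(void)
{
    /* Atomically subtracts 25; now == 75 afterwards. */
    __int64 now = _InterlockedSub64(&total, 25);
}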