TWO INTERSECTING LOGICAL OR · U+2A56

Character Information

Code Point
U+2A56
HEX
2A56
Unicode Plane
Basic Multilingual Plane
Category
Math Symbol

Character Representations

Encoding                 Hex           Binary
UTF-8                    E2 A9 96      11100010 10101001 10010110
UTF-16 (big-endian)      2A 56         00101010 01010110
UTF-16 (little-endian)   56 2A         01010110 00101010
UTF-32 (big-endian)      00 00 2A 56   00000000 00000000 00101010 01010110
UTF-32 (little-endian)   56 2A 00 00   01010110 00101010 00000000 00000000
HTML Entity
⩖
URI Encoded
%E2%A9%96
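
As a cross-check, every representation above can be reproduced with Python's standard library. The sketch below is an illustrative addition (the variable names are arbitrary), not part of the original reference:

    import urllib.parse

    ch = "\u2a56"  # TWO INTERSECTING LOGICAL OR

    # Byte-level encodings; the explicit -be/-le codecs fix the byte
    # order and suppress the BOM that plain "utf-16"/"utf-32" would emit.
    for codec in ("utf-8", "utf-16-be", "utf-16-le", "utf-32-be", "utf-32-le"):
        data = ch.encode(codec)
        print(codec.ljust(10),
              " ".join(f"{b:02X}" for b in data).ljust(12),
              " ".join(f"{b:08b}" for b in data))

    # HTML entity and URI-encoded (percent-encoded UTF-8) forms.
    print(f"&#x{ord(ch):X};")       # ⩖
    print(urllib.parse.quote(ch))   # %E2%A9%96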

Description

The Unicode character U+2A56, named "TWO INTERSECTING LOGICAL OR" (⩖), belongs to the "Supplemental Mathematical Operators" block of the Unicode Standard, which extends the basic set of symbols for mathematical functions and operations. As the name indicates, the glyph consists of two overlapping logical OR (∨) signs, and it serves as a variant disjunction operator in formal logic and related mathematical notation. It appears almost exclusively in typeset logical and mathematical expressions rather than in general-purpose text, and it has no significant cultural or linguistic use outside those applications.

How to type the symbol on Windows

Hold down Alt and type 10838 (the decimal form of hexadecimal 2A56) on the numeric keypad; this works in applications that accept decimal Unicode Alt codes. In Microsoft Word you can instead type 2A56 and press Alt+X. Failing that, locate the character in the Character Map utility and copy it.

  1. Step 1: Determine the UTF-8 encoding bit layout

    The character has the Unicode code point U+2A56. In UTF-8 it is encoded over 3 bytes, because its code point lies in the range 0x0800 to 0xFFFF.

    The three encoded bytes therefore follow the pattern 1110xxxx 10xxxxxx 10xxxxxx, where the 16 x positions carry the payload bits of the code point and the remaining bits are fixed markers.

    UTF-8 encoding bit layout by code point range

    Code Point Range     Bytes   Bit Pattern                           Payload
    U+0000 - U+007F      1       0xxxxxxx                              7 bits
    U+0080 - U+07FF      2       110xxxxx 10xxxxxx                     11 bits
    U+0800 - U+FFFF      3       1110xxxx 10xxxxxx 10xxxxxx            16 bits
    U+10000 - U+10FFFF   4       11110xxx 10xxxxxx 10xxxxxx 10xxxxxx   21 bits
  2. Step 2: Obtain the payload bits:

    Convert the hexadecimal code point U+2A56 to binary: 00101010 01010110. Those are the payload bits.

  3. Step 3: Fill in the bits to match the bit pattern:

    Obtain the final bytes by arranging the payload bits to match the bit layout:
    11100010 10101001 10010110
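
The three steps can be verified mechanically. The following Python sketch (an illustrative addition; the variable names cp, b1, b2, b3 are arbitrary) performs the bit arithmetic for the 3-byte case and compares the result with the built-in encoder:

    cp = 0x2A56  # code point of TWO INTERSECTING LOGICAL OR

    # Step 1: confirm the 3-byte range.
    assert 0x0800 <= cp <= 0xFFFF

    # Steps 2 and 3: take the 16 payload bits, split them 4/6/6, and
    # prepend the fixed markers 1110, 10, 10 to each group.
    b1 = 0xE0 | (cp >> 12)          # 1110xxxx <- top 4 payload bits
    b2 = 0x80 | ((cp >> 6) & 0x3F)  # 10xxxxxx <- middle 6 payload bits
    b3 = 0x80 | (cp & 0x3F)         # 10xxxxxx <- low 6 payload bits

    manual = bytes([b1, b2, b3])
    print(" ".join(f"{b:02X}" for b in manual))   # E2 A9 96
    print(" ".join(f"{b:08b}" for b in manual))   # 11100010 10101001 10010110

    # Cross-check against Python's encoder.
    assert manual == chr(cp).encode("utf-8")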