DOUBLE SUBSET·U+22D0

Character Information

Code Point
U+22D0
HEX
22D0
Unicode Plane
Basic Multilingual Plane
Category
Math Symbol

Character Representations

Encoding                 Hex           Binary
UTF-8                    E2 8B 90      11100010 10001011 10010000
UTF-16 (big-endian)      22 D0         00100010 11010000
UTF-16 (little-endian)   D0 22         11010000 00100010
UTF-32 (big-endian)      00 00 22 D0   00000000 00000000 00100010 11010000
UTF-32 (little-endian)   D0 22 00 00   11010000 00100010 00000000 00000000
HTML Entity
⋐
URI Encoded
%E2%8B%90
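The representations above can be reproduced with Python's standard codecs; a minimal sketch (variable names are illustrative):

```python
from urllib.parse import quote

ch = "\u22D0"  # DOUBLE SUBSET

# Byte sequences for each encoding, shown as space-separated hex
print(ch.encode("utf-8").hex(" "))      # e2 8b 90
print(ch.encode("utf-16-be").hex(" "))  # 22 d0
print(ch.encode("utf-16-le").hex(" "))  # d0 22
print(ch.encode("utf-32-be").hex(" "))  # 00 00 22 d0
print(ch.encode("utf-32-le").hex(" "))  # d0 22 00 00

# URI (percent) encoding of the UTF-8 bytes
print(quote(ch))  # %E2%8B%90
```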

Description

The Unicode character U+22D0 represents the "Double Subset" symbol (⋐). This mathematical symbol denotes a strengthened subset relation between two sets in digital text. It appears in fields such as mathematics, logic, and computer science; in analysis, for example, A ⋐ B is commonly used to mean that A is compactly contained in B (the closure of A is a compact subset of B). While there is no specific cultural or linguistic context associated with this symbol, its clear and precise usage in representing mathematical relationships makes it an important tool for conveying complex ideas accurately and concisely in digital text.

How to type the symbol on Windows

In applications that support Unicode Alt codes, hold Alt and type 8912 (the decimal value of 0x22D0) on the numeric keypad. In Microsoft Word, you can also type 22D0 and then press Alt+X. Alternatively, use Character Map.

  1. Step 1: Determine the UTF-8 encoding bit layout

    The character has the Unicode code point U+22D0. In UTF-8, it is encoded using 3 bytes because its codepoint is in the range of 0x0800 to 0xffff.

    Therefore we know that the 16 payload bits will be spread across the 3 bytes (24 bits) and that the encoding will have the format: 1110xxxx 10xxxxxx 10xxxxxx
    Where the x's are the payload bits.

    UTF-8 encoding bit layout by codepoint range
    Codepoint Range      Bytes   Bit pattern                           Payload length
    U+0000 - U+007F      1       0xxxxxxx                              7 bits
    U+0080 - U+07FF      2       110xxxxx 10xxxxxx                     11 bits
    U+0800 - U+FFFF      3       1110xxxx 10xxxxxx 10xxxxxx            16 bits
    U+10000 - U+10FFFF   4       11110xxx 10xxxxxx 10xxxxxx 10xxxxxx   21 bits
  2. Step 2: Obtain the payload bits:

    Convert the hexadecimal code point U+22D0 to binary: 00100010 11010000. Those are the payload bits.

  3. Step 3: Fill in the bits to match the bit pattern:

    Obtain the final bytes by arranging the payload bits to match the bit layout:
    11100010 10001011 10010000
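The three steps above can be sketched directly in code; a minimal example of the 3-byte case (bit-shift arithmetic, not a production encoder):

```python
code_point = 0x22D0

# Step 1: the code point falls in U+0800..U+FFFF, so UTF-8 uses 3 bytes
assert 0x0800 <= code_point <= 0xFFFF

# Steps 2-3: split the 16 payload bits into 4 + 6 + 6 and
# fill them into the pattern 1110xxxx 10xxxxxx 10xxxxxx
byte1 = 0xE0 | (code_point >> 12)          # 1110xxxx: top 4 payload bits
byte2 = 0x80 | ((code_point >> 6) & 0x3F)  # 10xxxxxx: middle 6 bits
byte3 = 0x80 | (code_point & 0x3F)         # 10xxxxxx: low 6 bits

encoded = bytes([byte1, byte2, byte3])
print(encoded.hex(" "))                     # e2 8b 90
print(encoded == "\u22D0".encode("utf-8"))  # True
```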