RUNIC LETTER DOTTED-L (U+16DB)

Character Information

Code Point
U+16DB
HEX
16DB
Unicode Plane
Basic Multilingual Plane
Category
Other Letter

Character Representations

Encoding                 Hex           Binary
UTF-8                    E1 9B 9B      11100001 10011011 10011011
UTF-16 (big-endian)      16 DB         00010110 11011011
UTF-16 (little-endian)   DB 16         11011011 00010110
UTF-32 (big-endian)      00 00 16 DB   00000000 00000000 00010110 11011011
UTF-32 (little-endian)   DB 16 00 00   11011011 00010110 00000000 00000000
HTML Entity
ᛛ
URI Encoded
%E1%9B%9B
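
The byte sequences above can be verified with a short Python sketch; the codec names used below are Python's standard encoding identifiers, not part of the original table.

```python
import urllib.parse

ch = "\u16DB"  # RUNIC LETTER DOTTED-L

# Byte sequence for each encoding, printed as uppercase hex pairs
for codec in ("utf-8", "utf-16-be", "utf-16-le", "utf-32-be", "utf-32-le"):
    print(f"{codec:<10} {ch.encode(codec).hex(' ').upper()}")

# Decimal HTML entity and percent-escaped UTF-8 (URI encoding)
print(f"&#{ord(ch)};")            # decimal entity, 5851 = 0x16DB
print(urllib.parse.quote(ch))     # %E1%9B%9B
```

Note that `bytes.hex()` with a separator argument requires Python 3.8 or later.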

Description

The character U+16DB, RUNIC LETTER DOTTED-L, belongs to the Runic block of Unicode. It represents a letter of the runic alphabets, the ancient Germanic writing systems used by various Germanic tribes and later adapted for languages such as Old English, Old Norse, and Old High German.

DOTTED-L is one of the "dotted" runes of the later runic tradition: a dot added to the base letter L (U+16DA, RUNIC LETTER LAUKAZ LAGU LOGR L) marks a phonetic distinction from the plain rune. Encoding the dotted form separately allows digital text to differentiate runes that would otherwise look alike, preserving the orthographic detail of historical inscriptions and manuscripts.

In short, U+16DB helps maintain the authenticity of runic texts in modern computing, so the historical significance and linguistic nuances of this ancient writing system can be studied and displayed with modern technology.

How to type the symbol on Windows

In some applications you can hold Alt and type 5851 (the decimal value of U+16DB) on the numpad, but support for decimal Alt codes above 255 varies. In Microsoft Word and WordPad, a more reliable method is to type 16DB and then press Alt+X to convert it to the character. Alternatively, find and copy the character in Character Map (charmap.exe).

  1. Step 1: Determine the UTF-8 encoding bit layout

    The character has the Unicode code point U+16DB. In UTF-8, it is encoded using 3 bytes because its code point falls in the range 0x0800 to 0xFFFF.

    Therefore we know that the UTF-8 encoding will spread 16 payload bits across the final 24 bits, following the pattern 1110xxxx 10xxxxxx 10xxxxxx,
    where each x is a payload bit.

    UTF-8 encoding bit layout by code point range

    Codepoint Range      Bytes   Bit pattern                           Payload length
    U+0000 - U+007F      1       0xxxxxxx                              7 bits
    U+0080 - U+07FF      2       110xxxxx 10xxxxxx                     11 bits
    U+0800 - U+FFFF      3       1110xxxx 10xxxxxx 10xxxxxx            16 bits
    U+10000 - U+10FFFF   4       11110xxx 10xxxxxx 10xxxxxx 10xxxxxx   21 bits
  2. Step 2: Obtain the payload bits:

    Convert the hexadecimal code point U+16DB to binary: 00010110 11011011. Those are the payload bits.

  3. Step 3: Fill in the bits to match the bit pattern:

    Obtain the final bytes by arranging the payload bits to match the bit layout: the 16 payload bits split into groups of 4, 6, and 6 (0001 | 011011 | 011011), which fill the x positions to give
    11100001 10011011 10011011 (hex E1 9B 9B)
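
The three steps above can be reproduced with a few lines of Python; this is a sketch of the manual bit manipulation, checked against the built-in UTF-8 codec.

```python
cp = 0x16DB  # code point of RUNIC LETTER DOTTED-L

# Step 1: the code point lies in U+0800..U+FFFF, so it needs 3 bytes
assert 0x0800 <= cp <= 0xFFFF

# Steps 2-3: split the 16 payload bits into groups of 4, 6, and 6,
# then OR each group into the 1110xxxx 10xxxxxx 10xxxxxx pattern
encoded = bytes([
    0b11100000 | (cp >> 12),          # top 4 bits    -> 1110xxxx
    0b10000000 | ((cp >> 6) & 0x3F),  # middle 6 bits -> 10xxxxxx
    0b10000000 | (cp & 0x3F),         # low 6 bits    -> 10xxxxxx
])

print(encoded.hex(" ").upper())       # E1 9B 9B
assert encoded == chr(cp).encode("utf-8")
```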