ARABIC LETTER HAH WITH TWO DOTS ABOVE·U+0757

ݗ

Character Information

Code Point
U+0757
HEX
0757
Unicode Plane
Basic Multilingual Plane
Category
Other Letter
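
The same information can be looked up programmatically. A minimal sketch using Python's standard unicodedata module (expected output shown in the comments):

    import unicodedata

    ch = "\u0757"                      # ARABIC LETTER HAH WITH TWO DOTS ABOVE

    print(hex(ord(ch)))                # 0x757 -> code point U+0757
    print(unicodedata.name(ch))        # ARABIC LETTER HAH WITH TWO DOTS ABOVE
    print(unicodedata.category(ch))    # Lo -> "Letter, other"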

Character Representations

Encoding                 Hex           Binary
UTF-8                    DD 97         11011101 10010111
UTF-16 (Big Endian)      07 57         00000111 01010111
UTF-16 (Little Endian)   57 07         01010111 00000111
UTF-32 (Big Endian)      00 00 07 57   00000000 00000000 00000111 01010111
UTF-32 (Little Endian)   57 07 00 00   01010111 00000111 00000000 00000000
HTML Entity
&#1879; (decimal) or &#x757; (hexadecimal)
URI Encoded
%DD%97
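
Each of these byte sequences can be verified with the Python standard library. A minimal sketch (expected output shown in the comments; bytes.hex with a separator requires Python 3.8 or later):

    from urllib.parse import quote

    ch = "\u0757"

    print(ch.encode("utf-8").hex(" ").upper())      # DD 97
    print(ch.encode("utf-16-be").hex(" ").upper())  # 07 57
    print(ch.encode("utf-16-le").hex(" ").upper())  # 57 07
    print(ch.encode("utf-32-be").hex(" ").upper())  # 00 00 07 57
    print(ch.encode("utf-32-le").hex(" ").upper())  # 57 07 00 00
    print(f"&#{ord(ch)};")                          # &#1879;  (decimal HTML entity)
    print(quote(ch))                                # %DD%97   (URI-encoded UTF-8)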

Description

The Unicode character U+0757, ARABIC LETTER HAH WITH TWO DOTS ABOVE, belongs to the Arabic Supplement block (U+0750 to U+077F). It is not one of the 28 letters of the standard Arabic alphabet; it is an extended letter formed from the base letter Hah (ح) with two dots placed above, and it is used in Arabic-script orthographies of languages other than Arabic. In digital text, U+0757 provides a single, unambiguous code point for this letter, so that text containing it can be stored, searched, and displayed consistently across platforms and devices. Support for the character in fonts, operating systems, and text-processing software matters for rendering these extended Arabic-script orthographies faithfully and for keeping the languages that rely on them usable in a digital environment.

How to type the ݗ symbol on Windows

In applications that accept decimal Unicode Alt codes, hold Alt and type 1879 on the numeric keypad. In Microsoft Word you can instead type 0757 followed by Alt+X, or you can copy the character from Character Map.
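
The Alt code is simply the decimal value of the hexadecimal code point. A quick check in Python:

    # 0x0757 in decimal is 1879, the value typed on the numpad.
    print(int("0757", 16))    # 1879
    print(ord("\u0757"))      # 1879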

  1. Step 1: Determine the UTF-8 encoding bit layout

    The character ݗ has the Unicode code point U+0757. In UTF-8 it is encoded using 2 bytes, because its code point falls in the range U+0080 to U+07FF.

    The two-byte sequence therefore carries 11 payload bits within its 16 bits and has the format 110xxxxx 10xxxxxx,
    where each x is a payload bit.

    UTF-8 encoding bit layout by code point range:

    Codepoint Range      Bytes   Bit pattern                           Payload length
    U+0000 - U+007F      1       0xxxxxxx                              7 bits
    U+0080 - U+07FF      2       110xxxxx 10xxxxxx                     11 bits
    U+0800 - U+FFFF      3       1110xxxx 10xxxxxx 10xxxxxx            16 bits
    U+10000 - U+10FFFF   4       11110xxx 10xxxxxx 10xxxxxx 10xxxxxx   21 bits
  2. Step 2: Obtain the payload bits:

    Convert the hexadecimal code point U+0757 to binary: 00000111 01010111. Dropping the five leading zeros leaves the 11 payload bits: 11101 010111 (shown already split into the 5 high bits and the 6 low bits used in the next step).

  3. Step 3: Fill in the bits to match the bit pattern:

    Obtain the final bytes by placing the 5 high payload bits after the 110 prefix of the first byte and the 6 low payload bits after the 10 prefix of the second byte (a short code sketch of this procedure follows these steps):
    110 11101  10 010111  →  11011101 10010111  =  DD 97
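
The three steps above can be reproduced with a few explicit bit operations instead of calling str.encode(). A minimal sketch in Python:

    # Encode U+0757 as UTF-8 by hand (2-byte form: 110xxxxx 10xxxxxx).
    cp = 0x0757                              # code point of ݗ
    assert 0x0080 <= cp <= 0x07FF            # confirms the 2-byte range

    byte1 = 0b11000000 | (cp >> 6)           # 110 prefix + high 5 payload bits -> 0xDD
    byte2 = 0b10000000 | (cp & 0b111111)     # 10 prefix + low 6 payload bits   -> 0x97

    print(f"{byte1:08b} {byte2:08b}")        # 11011101 10010111
    print(bytes([byte1, byte2]))             # b'\xdd\x97'
    assert bytes([byte1, byte2]) == "\u0757".encode("utf-8")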