These functions help you respond to web pages that declare incorrect encodings. You can use guess_encoding() to figure out what the real encoding is (and then supply that to the encoding argument of read_html()), or use repair_encoding() to fix character vectors after the fact.
guess_encoding(x)

repair_encoding(x, from = NULL)
Argument | Description
---|---
`x` | A character vector.
`from` | The encoding that the string is actually in. If `NULL`, guess_encoding() is used to guess it.
These functions are wrappers around tools from the fantastic stringi package, so you'll need to make sure you have that installed.
```r
# A file with bad encoding included in the package
path <- system.file("html-ex", "bad-encoding.html", package = "rvest")
x <- read_html(path)
x %>% html_nodes("p") %>% html_text()
#> [1] "\xc9migré cause célèbre déjà vu."

guess_encoding(x)
#>      encoding language confidence
#> 1  ISO-8859-1       fr       0.31
#> 2  ISO-8859-2       ro       0.22
#> 3    UTF-16BE                0.10
#> 4    UTF-16LE                0.10
#> 5     GB18030       zh       0.10
#> 6        Big5       zh       0.10
#> 7  ISO-8859-9       tr       0.06
#> 8  IBM424_rtl       he       0.01
#> 9  IBM424_ltr       he       0.01

# Two valid encodings, only one of which is correct
read_html(path, encoding = "ISO-8859-1") %>% html_nodes("p") %>% html_text()
#> [1] "Émigré cause célèbre déjà vu."
read_html(path, encoding = "ISO-8859-2") %>% html_nodes("p") %>% html_text()
#> [1] "Émigré cause célčbre déjŕ vu."
```
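When you can't re-read the document with the right encoding, repair_encoding() converts the already-extracted strings instead. A minimal sketch of the underlying idea, using base R's iconv() for illustration (an assumption for this sketch; the real function delegates to stringi's conversion tools):

```r
# A string whose bytes are ISO-8859-1 but were treated as something else.
# Declaring the true source encoding and converting to UTF-8 is what
# repair_encoding(x, from = "ISO-8859-1") does in spirit.
bad <- "\xc9migr\xe9 cause c\xe9l\xe8bre d\xe9j\xe0 vu."
repaired <- iconv(bad, from = "ISO-8859-1", to = "UTF-8")
repaired
#> [1] "Émigré cause célèbre déjà vu."
```

If `from` is omitted, guess_encoding() can supply a candidate, but when several encodings have similar confidence (as in the table above) you should check the results by eye.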