9

I created the following for truncating a string in Java to a new string with a given number of bytes.

        String truncatedValue = "";
        String currentValue = string;
        int pivotIndex = (int) Math.round(((double) string.length()) / 2);
        while (!truncatedValue.equals(currentValue)) {
            currentValue = string.substring(0, pivotIndex);
            byte[] bytes = currentValue.getBytes(encoding);
            if (bytes == null) {
                return string;
            }
            int byteLength = bytes.length;
            int newIndex = (int) Math.round(((double) pivotIndex) / 2);
            if (byteLength > maxBytesLength) {
                pivotIndex = newIndex;
            } else if (byteLength < maxBytesLength) {
                pivotIndex = pivotIndex + 1;
            } else {
                truncatedValue = currentValue;
            }
        }
        return truncatedValue;

This is the first thing that came to mind, and I know I could improve on it. I saw another post asking a similar question, but they were truncating strings using the bytes instead of String.substring. I think I would rather use String.substring in my case.

EDIT: I just removed the UTF8 reference because I would rather be able to do this for different storage types as well.

Vasil Lukach
stevebot
  • I would rephrase your problem. You are trying to fit a string into a byte array that cannot be larger than maxUTF8BytesLength. You want to use UTF-8 for the encoding. You want to copy as many characters as possible. Correct? – gawi Aug 26 '10 at 15:51
  • right, I would say that is correct. I also would like to do it efficiently. – stevebot Aug 26 '10 at 16:04
  • I just edited the question to not reference UTF-8. Sorry about that, it was misleading. – stevebot Aug 26 '10 at 16:09

12 Answers

14

Why not convert to bytes and walk forward, obeying UTF-8 character boundaries as you go, until you've got the max number of bytes, then convert those bytes back into a string?
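A sketch of that first approach (assuming, as below, that getBytes always produces valid UTF-8; the class and method names here are mine):

import java.nio.charset.StandardCharsets;

public class ByteWalker {
  // Walk the UTF-8 lead bytes and keep only whole sequences that fit.
  public static String cut(String s, int maxBytes) {
    byte[] utf8 = s.getBytes(StandardCharsets.UTF_8);
    if (utf8.length <= maxBytes) return s;
    int end = 0;
    while (end < maxBytes) {
      int b = utf8[end] & 0xFF;  // always a lead byte, since we advance by whole sequences
      int len = b < 0x80 ? 1 : b < 0xE0 ? 2 : b < 0xF0 ? 3 : 4;
      if (end + len > maxBytes) break;  // the next sequence would be cut in half
      end += len;
    }
    return new String(utf8, 0, end, StandardCharsets.UTF_8);
  }
}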

Or you could just cut the original string if you keep track of where the cut should occur:

// Assuming that Java will always produce valid UTF8 from a string, so no error checking!
// (Is this always true, I wonder?)
public class UTF8Cutter {
  public static String cut(String s, int n) {
    byte[] utf8 = s.getBytes(StandardCharsets.UTF_8);
    if (utf8.length <= n) return s;
    int n16 = 0;  // length of the truncated string in UTF-16 chars
    int i = 0;    // index into the UTF-8 bytes, always at a sequence start
    while (i < n) {
      int advance = 1;  // UTF-16 chars this code point occupies
      if ((utf8[i] & 0x80) == 0) i += 1;          // 1-byte (ASCII) sequence
      else if ((utf8[i] & 0xE0) == 0xC0) i += 2;  // 2-byte sequence
      else if ((utf8[i] & 0xF0) == 0xE0) i += 3;  // 3-byte sequence
      else { i += 4; advance = 2; }               // 4-byte sequence: a surrogate pair in UTF-16
      if (i <= n) n16 += advance;                 // count it only if the whole sequence fits
    }
    return s.substring(0, n16);
  }
}

Note: edited to fix bugs on 2014-08-25

Rex Kerr
  • I definitely could do that. Is there any reason why using String.substring is any worse? It seems like doing it the way you describe would have to account for all the code points, which isn't a whole lot of fun. (depending on your definition of fun :) ). – stevebot Aug 26 '10 at 16:04
  • @stevebot - To be efficient, you need to take advantage of the known structure of the data. If you don't care about efficiency and want it to be easy, or you want to support every possible Java encoding without having to know what it is, your method seems reasonable enough. – Rex Kerr Aug 26 '10 at 16:22
  • Wouldn’t it be even more efficient to iterate over the String’s characters and predict their encoded length, instead of encoding the entire string and then iterating over the encoded bytes to reconstitute their character association? Similar to [this](https://stackoverflow.com/a/21766744/2711488), just with non-BMP character support and counting before doing `substring`, like in your answer… – Holger Mar 11 '20 at 09:21
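For reference, a minimal sketch of the approach Holger describes in the comment above (the method name is mine; it assumes well-formed input with no unpaired surrogates):

public static String cutByPredictedLength(String s, int maxBytes) {
    int bytes = 0, i = 0;
    while (i < s.length()) {
        int cp = s.codePointAt(i);
        // predict the UTF-8 length of this code point instead of encoding it
        int len = cp < 0x80 ? 1 : cp < 0x800 ? 2 : cp < 0x10000 ? 3 : 4;
        if (bytes + len > maxBytes) break;
        bytes += len;
        i += Character.charCount(cp); // 2 chars for a non-BMP code point
    }
    return s.substring(0, i); // a single substring at the very end
}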
8

The saner solution is to use a decoder:

final Charset CHARSET = Charset.forName("UTF-8"); // or any other charset
final byte[] bytes = inputString.getBytes(CHARSET);
final CharsetDecoder decoder = CHARSET.newDecoder();
decoder.onMalformedInput(CodingErrorAction.IGNORE); // drop a sequence the limit cuts in half
decoder.reset();
// limit is the maximum number of bytes to keep
final CharBuffer decoded = decoder.decode(ByteBuffer.wrap(bytes, 0, limit));
final String outputString = decoded.toString();
kan
  • Cutting at an arbitrary byte index may create invalid encoded data, as a single character may use multiple bytes (especially with UTF-8). Worse, with other encodings it might produce wrong valid characters, which are not ignored. You could easily avoid this by first allocating a `ByteBuffer` with the desired size, then use it with a `CharsetEncoder`, which will automatically encode only as many valid characters as fit into the buffer, then decode the buffer to a `String`. Similar approach, but without the bug, and even more efficient, as it won’t encode characters beyond the intended limit. – Holger Mar 11 '20 at 09:08
  • See [this answer](https://stackoverflow.com/a/21765851/2711488). It even eliminates the decoding step. – Holger Mar 11 '20 at 09:14
  • @Holger My solution ignores truncated multibyte chars by `CodingErrorAction.IGNORE`. So it works fine. I am interested to see an example when it fails. However I agree, your solution looks neater and could be more performant. – kan Jul 06 '21 at 08:17
  • Yes, for UTF-8, using CodingErrorAction.IGNORE will do the right thing. But the OP said “I would rather be able to do this for different storage types as well”, and for other encodings, tearing multibyte sequences apart may result in valid (but wrong) characters. – Holger Jul 06 '21 at 08:50
5

I think Rex Kerr's solution has 2 bugs.

  • First, it will truncate to limit+1 if a non-ASCII character is just before the limit. Truncating "123456789á1" will result in "123456789á", which is represented by 11 bytes in UTF-8.
  • Second, I think he misinterpreted the UTF standard. https://en.wikipedia.org/wiki/UTF-8#Description shows that a 110xxxxx at the beginning of a UTF-8 sequence tells us that the representation is 2 bytes long (as opposed to 3). That's the reason his implementation usually doesn't use up all available space (as Nissim Avitan noted).

Please find my corrected version below:

// NB: despite its name, charLimit is a limit in UTF-8 bytes
public String cut(String s, int charLimit) throws UnsupportedEncodingException {
    byte[] utf8 = s.getBytes("UTF-8");
    if (utf8.length <= charLimit) {
        return s;
    }
    int n16 = 0;
    boolean extraLong = false;
    int i = 0;
    while (i < charLimit) {
        // Unicode characters above U+FFFF need 2 words in utf16
        extraLong = ((utf8[i] & 0xF0) == 0xF0);
        if ((utf8[i] & 0x80) == 0) {
            i += 1;
        } else {
            // count the high-order 1-bits of the lead byte: that's the sequence length in bytes
            int b = utf8[i];
            while ((b & 0x80) > 0) {
                ++i;
                b = b << 1;
            }
        }
        if (i <= charLimit) {
            n16 += (extraLong) ? 2 : 1;
        }
    }
    return s.substring(0, n16);
}

I still thought this was far from efficient. So if you don't really need the String representation of the result and the byte array will do, you can use this:

private byte[] cutToBytes(String s, int charLimit) throws UnsupportedEncodingException {
    byte[] utf8 = s.getBytes("UTF-8");
    if (utf8.length <= charLimit) {
        return utf8;
    }
    if ((utf8[charLimit] & 0xC0) != 0x80) {
        // the byte at the limit is ASCII or a lead byte,
        // so the limit doesn't cut a UTF-8 sequence
        return Arrays.copyOf(utf8, charLimit);
    }
    // walk back over the continuation bytes of the sequence the limit cuts into...
    int i = 0;
    while ((utf8[charLimit - i - 1] & 0xC0) == 0x80) {
        ++i;
    }
    // ...and drop its lead byte as well
    return Arrays.copyOf(utf8, charLimit - i - 1);
}

The funny thing is that with a realistic 20-500 byte limit they perform pretty much the same, IF you create a string from the byte array again.

Please note that both methods assume valid UTF-8 input, which is a safe assumption after using Java's getBytes() function.

Zsolt Taskai
  • You should also catch UnsupportedEncodingException at s.getBytes("UTF-8") – asalamon74 May 19 '15 at 10:04
  • I don't see getBytes throwing anything. Although http://docs.oracle.com/javase/7/docs/api/java/lang/String.html#getBytes%28java.lang.String%29 says "The behavior of this method when this string cannot be encoded in the given charset is unspecified." – Zsolt Taskai Aug 29 '15 at 00:25
  • The page you linked shows that it throws UnsupportedEncodingException: "public byte[] getBytes(String charsetName) throws UnsupportedEncodingException" – asalamon74 Aug 29 '15 at 18:45
  • Thanks! Strange, I don't know what version I used when I posted this solution 2 years ago. Updating the code above. – Zsolt Taskai Sep 17 '15 at 23:27
  • Instead of providing the encoding name as a String, you can use the Charset constants from the StandardCharsets class, because the String#getBytes(Charset charset) method does not throw UnsupportedEncodingException. – Pikachu Mar 24 '17 at 17:32
4

String s = "FOOBAR";

int limit = 3;
s = new String(s.getBytes(), 0, limit);

Resulting value of s:

FOO
Ilya Lysenko
  • When the MAX_LENGTH interrupts the byte array in the middle of a multi-byte sequence, then the resulting string ends with a "?". Example: `s = "ää";` `MAX_LENGTH = 3;` result: `"ä?"` Given the simplicity of this code, however maybe in some situations this might be an option. – Martin Rust Jul 30 '20 at 13:41
  • Correction to my comment: `MAX_LENGTH = 5` (why does the solution use `MAX_LENGTH - 2`?) Also note that as of Java 7, `"UTF-8"` should be replaced by `StandardCharsets.UTF_8`. – Martin Rust Jul 30 '20 at 13:49
3

Use the UTF-8 CharsetEncoder, and encode until the output ByteBuffer contains as many bytes as you are willing to take, by looking for CoderResult.OVERFLOW.
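A minimal sketch of that approach (class and method names are mine; assumes well-formed input with no unpaired surrogates): the encoder stops at the last character that fully fits, so it never splits a multi-byte sequence or a surrogate pair.

import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharsetEncoder;
import java.nio.charset.CoderResult;
import java.nio.charset.StandardCharsets;

public class EncoderTruncate {
    static String truncate(String s, int maxBytes) {
        CharsetEncoder encoder = StandardCharsets.UTF_8.newEncoder();
        ByteBuffer out = ByteBuffer.allocate(maxBytes); // hard byte limit
        CharBuffer in = CharBuffer.wrap(s);
        CoderResult result = encoder.encode(in, out, true);
        // OVERFLOW means the buffer filled before the input was exhausted;
        // in.position() is the number of chars that were fully encoded.
        return result.isOverflow() ? s.substring(0, in.position()) : s;
    }
}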

bmargulies
2

As noted, Peter Lawrey's solution has a major performance disadvantage (~3,500 ms for 10,000 runs). Rex Kerr's was much better (~500 ms for 10,000 runs), but the result was not accurate: it cut much more than it needed (instead of leaving 4,000 bytes it left 3,500 in some examples). Attached here is my solution (~250 ms for 10,000 runs), assuming that the maximum length of a UTF-8 char is 4 bytes (thanks Wikipedia):

public static String cutWord(String word, int dbLimit) throws UnsupportedEncodingException {
    double MAX_UTF8_CHAR_LENGTH = 4.0;
    if (word.length() > dbLimit) {
        word = word.substring(0, dbLimit);
    }
    if (word.length() > dbLimit / MAX_UTF8_CHAR_LENGTH) {
        int residual = word.getBytes("UTF-8").length - dbLimit;
        if (residual > 0) {
            int tempResidual = residual, start, end = word.length();
            while (tempResidual > 0) {
                start = end - ((int) Math.ceil((double) tempResidual / MAX_UTF8_CHAR_LENGTH));
                tempResidual = tempResidual - word.substring(start, end).getBytes("UTF-8").length;
                end = start;
            }
            word = word.substring(0, end);
        }
    }
    return word;
}
  • Doesn't look like this solution prevents a trailing half surrogate pair? Second, in case getBytes().length would happen to be applied to both halves of a surrogate pair individually (not immediately obvious to me it never will), it'd also underestimate the size of the UTF-8 representation of the pair as a whole, assuming the "replacement byte array" is a single byte. Third, the 4-byte UTF-8 code points all require a two-char surrogate pair in Java, so effectively the max is just 3 bytes per Java character. – Stefan L Feb 16 '13 at 23:33
1

You could convert the string to bytes and convert just those bytes back to a string.

public static String substring(String text, int maxBytes) {
    StringBuilder ret = new StringBuilder();
    for (int i = 0; i < text.length(); i++) {
        // works out how many bytes a character takes,
        // and removes these from the total allowed
        if ((maxBytes -= text.substring(i, i + 1).getBytes().length) < 0) break;
        ret.append(text.charAt(i));
    }
    return ret.toString();
}
Peter Lawrey
  • @nguyendat, there are lots of reasons this is not very performant. The main one would be the object creation for substring() and getBytes(). However, you would be surprised how much you can do in a millisecond, and that is usually enough. – Peter Lawrey Dec 17 '10 at 11:46
  • That method doesn't handle surrogate pairs properly, e.g. substring("\uD800\uDF30\uD800\uDF30", 4).getBytes("UTF-8").length will return 8, not 4. Half a surrogate pair is represented as a single-byte "?" by String.getBytes("UTF-8"). – Stefan L Feb 17 '13 at 00:14
  • @StefanL I posted a variant of this answer [here](http://stackoverflow.com/a/41071240/2599133) which should handle surrogate pairs properly. – Hans Brende Dec 10 '16 at 01:31
0

This is mine:

private static final int FIELD_MAX = 2000;
private static final Charset CHARSET = Charset.forName("UTF-8");

public String trancStatus(String status) {

    if (status != null && (status.getBytes(CHARSET).length > FIELD_MAX)) {
        // binary search for the longest prefix that encodes to at most FIELD_MAX bytes
        int left = 0, right = status.length();

        while (left < right) {
            int mid = left + (right - left + 1) / 2; // bias up so the loop always progresses

            if (status.substring(0, mid).getBytes(CHARSET).length <= FIELD_MAX) {
                left = mid;      // this prefix fits, try a longer one
            } else {
                right = mid - 1; // too long, try a shorter one
            }
        }

        return status.substring(0, left);

    } else {
        return status;
    }
}
Matt McMinn
0

Using the regular expression below, you can also remove leading and trailing white space from double-byte character strings.

stringtoConvert = stringtoConvert.replaceAll("^[\\s ]*", "").replaceAll("[\\s ]*$", "");
Baby Groot
0

This may not be the most efficient solution, but it works:

public static String substring(String s, int byteLimit) {
    if (s.getBytes().length <= byteLimit) {
        return s;
    }

    // start from the longest prefix that could possibly fit
    // (each char encodes to at least one byte) and shrink until it does
    int n = Math.min(byteLimit, s.length());
    do {
        s = s.substring(0, n--);
    } while (s.getBytes().length > byteLimit);

    return s;
}
0

I've improved upon Peter Lawrey's solution to accurately handle surrogate pairs. In addition, I optimized based on the fact that the maximum number of bytes per char in UTF-8 encoding is 3.

public static String substring(String text, int maxBytes) {
    // once the remaining chars cannot exceed the remaining byte budget
    // (at most 3 bytes per char in UTF-8), the rest is guaranteed to fit
    for (int i = 0, len = text.length(); (len - i) * 3 > maxBytes;) {
        int j = text.offsetByCodePoints(i, 1); // never splits a surrogate pair
        if ((maxBytes -= text.substring(i, j).getBytes(StandardCharsets.UTF_8).length) < 0)
            return text.substring(0, i);
        i = j;
    }
    return text;
}
Hans Brende
0

Binary search approach in Scala:

import scala.annotation.tailrec

private def bytes(s: String) = s.getBytes("UTF-8")

def truncateToByteLength(string: String, length: Int): String =
  if (length <= 0 || string.isEmpty) ""
  else {
    @tailrec
    def loop(badLen: Int, goodLen: Int, good: String): String = {
      assert(badLen > goodLen, s"""badLen is $badLen but goodLen is $goodLen ("$good")""")
      if (badLen == goodLen + 1) good
      else {
        val mid = goodLen + (badLen - goodLen) / 2
        val midStr = string.take(mid)
        if (bytes(midStr).length > length)
          loop(mid, goodLen, good)
        else
          loop(badLen, mid, midStr)
      }
    }

    loop(string.length * 2, 0, "")
  }
nafg