Handling and detailed analysis of widespread ORA-600 [kole_t2u] errors in Oracle
[color=navy]Likely cause in general: Oracle is robust when handling VARCHAR2 data and can tolerate special characters, but when processing CLOB data, certain special characters can cause the parsed executable statement to be malformed, raising ORA-00600: internal error code, arguments: [kole_t2u].
Below is Oracle's official explanation of the ORA-600 [kole_t2u] error:
The meaning of ORA-600 [kole_t2u]
The error ORA-600 [kole_t2u] (note that the error's arguments are important here) is closely related to the more normal-looking error ORA-29275.
Both errors can only come up in a database that is using a multibyte character set (i.e., a character set in which at least some characters are represented by more than 1 byte), and both mean that a malformed byte sequence has been found which cannot represent a character.
ORA-29275 comes up when this occurs for normal VARCHAR2 data. ORA-600 [kole_t2u] is thrown under similar circumstances, but (usually) when CLOB data is being used.
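The underlying condition can be illustrated outside the database. Below is a minimal sketch in plain Python (not Oracle code), assuming UTF-8 as a stand-in for a multibyte database character set such as AL32UTF8:

```python
# A byte sequence cut short mid-codepoint cannot represent a character in a
# multibyte character set - roughly the condition Oracle detects when it
# raises ORA-29275 / ORA-600 [kole_t2u].

valid = "汉".encode("utf-8")   # b'\xe6\xb1\x89' - one complete 3-byte codepoint
truncated = valid[:2]          # b'\xe6\xb1'     - an incomplete codepoint

def is_well_formed(data: bytes) -> bool:
    """True if the bytes decode cleanly as UTF-8."""
    try:
        data.decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False

print(is_well_formed(valid))      # True
print(is_well_formed(truncated))  # False
```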
[color=darkred]1: Invalid multibyte data being inserted into a CLOB
This is the simplest occurrence of this error. It can be spotted immediately because the error is raised as a direct result of the statement that tries to load the incorrect data into the CLOB. Example 1 above is an example of this type.
If the cause of this problem is that the application is indeed pushing incorrect data into the CLOB, then this is an application error that needs to be corrected.
Alternatively, this type of problem could be the result of encrypted data being loaded into a CLOB. In this case the problem is not with the data, but with the fact that a binary string is being loaded into a CLOB. To resolve this, the data model needs to be adapted and the data should be loaded into a RAW or BLOB column instead.
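The reason ciphertext does not belong in a CLOB can be sketched as follows; the byte values below are a fabricated stand-in for real ciphertext, which is effectively an arbitrary byte string:

```python
# Encrypted output is arbitrary bytes, so it is almost never a well-formed
# character sequence - hence it belongs in a RAW/BLOB column, not a CLOB.

ciphertext = bytes([0x8f, 0x1b, 0xfe, 0xd0, 0x42, 0x99, 0xc3, 0x28])

def is_character_data(data: bytes) -> bool:
    try:
        data.decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False

print(is_character_data("plain text".encode("utf-8")))  # True
print(is_character_data(ciphertext))                    # False - binary, not text
```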
See point B.10) in Note 788156.1 AL32UTF8 / UTF8 (Unicode) Database Character Set Implications
[color=darkred]2: Invalid multibyte data copied from VARCHAR2 into CLOB
This type of issue can often look like a bug, but is in fact related to incomplete codepoints that are already stored in the database. Example 2 above shows how such data might be inserted into a VARCHAR2 column. If this data is subsequently moved to a CLOB column, this error will come up.
A relatively easy way of detecting this sort of data is to select it from the VARCHAR2 column, in which case an ORA-29275 error would be expected as well. However, there are more complex cases in which the data is processed in the application or in PL/SQL before being loaded into the CLOB. If this processing causes incomplete codepoints to appear, we would also see ORA-600 [kole_t2u], but this could not be detected from the base data.
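A scan for such rows can also be scripted outside the database. The sketch below is illustrative only: the `(row_id, raw_bytes)` input shape is an assumption, standing in for column values dumped from the suspect VARCHAR2 column:

```python
def find_invalid_rows(rows):
    """Return (row_id, byte_offset) for rows whose bytes are not valid UTF-8."""
    bad = []
    for row_id, data in rows:
        try:
            data.decode("utf-8")
        except UnicodeDecodeError as exc:
            bad.append((row_id, exc.start))   # offset of the malformed sequence
    return bad

sample = [
    (1, "good value".encode("utf-8")),
    (2, "汉字".encode("utf-8")[:-1]),   # incomplete trailing codepoint
]
print(find_invalid_rows(sample))        # [(2, 3)]
```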
Typically this type of error is seen when extended database auditing is used and the database has invalid multibyte data stored. This could, for example, be the result of wrongly implemented encryption (see the paragraph above).
In this scenario, extended auditing causes the bind values used in SQL statements to be written to an auditing record, which uses a CLOB column for this purpose. If the bind values are fetched from the invalid multibyte data, the copy to the CLOB performed by the auditing system fails with ORA-600 [kole_t2u]. At first sight this looks like a bug in the auditing system, but the core problem is that invalid data is stored in the first place, and this needs to be addressed.
Other mechanisms (either Oracle provided, or application based) which copy data between VARCHAR2 and CLOB columns could run into the same problems.
As per the above example, this case represents a problem in the stored data in the database, and this needs to be addressed. The fact that the "copying process" runs into the ORA-600 is simply a result of the underlying problems.
[color=darkred]3: Incorrect CLOB splits
This type of problem is the hardest to detect and usually requires extensive debugging before the problem can be located. Before going down this path, it is usually preferable to go through the list of known issues and apply any relevant patches to rule out known problems.
The background of this type of error is that CLOB data is sometimes split into more manageable chunks of a certain length. If this split is made after a certain number of bytes (for example 1000, or 4000, etc.), it can happen that the split falls in the middle of a multibyte codepoint. This leaves the preceding chunk with an incomplete codepoint at the end of its data, and this error can be expected. If this type of problem occurs, it is therefore due to a bug in the way CLOB data is split into chunks: rather than splitting on bytes, the split should always be made on full characters.
If the split is made in an application (for processing on the application side), this represents an application bug that needs to be corrected. There are also a number of known bugs in the Oracle database that can cause this to happen, and anybody who suspects they might be hitting one is advised to apply the relevant patches.
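The faulty byte-based split versus the correct character-based split can be sketched as follows; the chunk sizes (6 bytes / 4 characters) are chosen only for illustration:

```python
# 'é' occupies 2 bytes in UTF-8, so a byte-based split can cut it in half.
text = "data:" + "é" * 4
raw = text.encode("utf-8")          # 13 bytes in total

def decodable(chunk: bytes) -> bool:
    try:
        chunk.decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False

# Byte-based split (the buggy approach): the cut lands mid-codepoint.
byte_chunks = [raw[i:i + 6] for i in range(0, len(raw), 6)]
print([decodable(c) for c in byte_chunks])   # [False, False, False]

# Character-based split (the fix): split the string, then encode each chunk.
char_chunks = [text[i:i + 4].encode("utf-8") for i in range(0, len(text), 4)]
print([decodable(c) for c in char_chunks])   # [True, True, True]
```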
[color=darkred]4: Other Oracle "internal" issues
Other than the Oracle bugs mentioned above, the following known bugs can also cause this error:
* Bug 7378401 & Bug 8216864 - both fixed in 10.2.0.5 and 188.8.131.52
Bug 7378401 can cause this error in the Oracle auditing system when the AUDIT_TRAIL parameter is set to "db_extended" and there are bind variables of the NCHAR datatype. The bug is only hit when the bind variables are of the NCHAR datatype; if the datatype is a normal CHAR or VARCHAR, this bug cannot be the cause.
Note that the fix for bug 7378401 exposes a new bug, 8216864, causing the same error code. Patches for both bugs should therefore be applied at the same time.
* Bug 10334711 - an UPDATE statement encounters ORA-600 [kole_t2u] while auditing SQL bind data. If ORA-600 [kole_t2u] is raised when a multibyte character starts at the 4000th byte of the character string, this bug can be suspected.
Workaround: do not use the EXTENDED setting of AUDIT_TRAIL.
Fixed in 184.108.40.206 and 12.1
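The boundary condition described for bug 10334711 can be reproduced conceptually; the sketch below models only the byte arithmetic (not Oracle's audit code), again using UTF-8 as a stand-in for the database character set:

```python
def decodes_cleanly(data: bytes) -> bool:
    try:
        data.decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False

# Build a value whose first multibyte character begins exactly at the
# 4000th byte (offset 3999) - the trigger condition described above.
value = "a" * 3999 + "汉"
raw = value.encode("utf-8")          # 4002 bytes: 3999 + one 3-byte character

print(decodes_cleanly(raw))          # True  - the full value is well formed
print(decodes_cleanly(raw[:4000]))   # False - a 4000-byte cut splits the character
```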