llama.cpp provides LLM inference in C/C++. The `type` member of the `rpc_tensor` structure is attacker-controlled and is used without sufficient validation, which can cause a global-buffer-overflow. This vulnerability may lead to memory data leakage. The vulnerability is fixed in release b3561.
- Affected member: `type`
- Affected structure: `rpc_tensor`
- Vulnerability class: global-buffer-overflow
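
The snippet below is a minimal, self-contained sketch of the vulnerability class, not the actual llama.cpp code: an unvalidated `type` field received from an untrusted RPC peer is used as an index into a fixed-size global table, reading past its end, and a bounds check in the spirit of the b3561 fix rejects the malformed tensor. All names here (`rpc_tensor_wire`, `type_size`, `TYPE_COUNT`, `element_size_*`) are illustrative stand-ins, not llama.cpp identifiers.

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical stand-ins for the real type definitions.
enum fake_ggml_type : uint32_t { TYPE_F32 = 0, TYPE_F16 = 1, TYPE_COUNT = 2 };

// Global trait table indexed by tensor type; an out-of-range index
// reads past the end of this array (global-buffer-overflow).
static const size_t type_size[TYPE_COUNT] = { 4, 2 };

// Simplified view of the wire structure: the type field arrives as a
// raw integer from an untrusted RPC client.
struct rpc_tensor_wire { uint32_t type; uint64_t ne[4]; };

// Vulnerable pattern: trusting the remote type value as an array index.
size_t element_size_unchecked(const rpc_tensor_wire &t) {
    return type_size[t.type];               // OOB read if t.type >= TYPE_COUNT
}

// Hardened pattern: validate the attacker-controlled field before use.
bool element_size_checked(const rpc_tensor_wire &t, size_t *out) {
    if (t.type >= TYPE_COUNT) {
        return false;                       // reject malformed tensor
    }
    *out = type_size[t.type];
    return true;
}

int main() {
    rpc_tensor_wire bad = { 1000, {1, 1, 1, 1} };   // malicious type value
    size_t sz = 0;
    if (!element_size_checked(bad, &sz)) {
        std::puts("rejected tensor with out-of-range type");
    }
    return 0;
}
```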