
[Feature Request] A model with dynamic input and dynamic output will have a memory leak after inference with OpenVINO. #24162


Open
ZzhangYyan opened this issue Mar 25, 2025 · 1 comment
Labels
ep:OpenVINO (issues related to OpenVINO execution provider)

Comments

@ZzhangYyan

ZzhangYyan commented Mar 25, 2025

Describe the feature request

Application: the LightGlue model for image registration. The input is a fixed-size image and the outputs have dynamic shapes. OpenVINO leaks memory after inference; I hope this problem can be solved.

Model source: https://github.com/fabio-sim/LightGlue-ONNX/releases/tag/v1.0.0

OpenVINO version: 2024.3.0.
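For reference, the code below starts at inference-request creation; the model loading/compilation step is not included in the report. A minimal sketch of how the m_pModel member used in Infer() could be set up with the OpenVINO C++ API is shown here (the model filename, the "CPU" device, and the ov::CompiledModel member type are assumptions for illustration, not taken from the issue):

#include <openvino/openvino.hpp>

// Hypothetical setup for the m_pModel member referenced in Infer() below.
// Model path and device name are placeholders.
ov::Core core;
std::shared_ptr<ov::Model> model = core.read_model("lightglue_end2end.onnx");
ov::CompiledModel compiled = core.compile_model(model, "CPU");
// m_pModel is presumably something like:
// std::unique_ptr<ov::CompiledModel> m_pModel = std::make_unique<ov::CompiledModel>(std::move(compiled));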

Code:
std::vector<float> ApplyTransform1(const cv::Mat& image, float& mean, float& std)
{
    cv::Mat floatImage;
    image.convertTo(floatImage, CV_32FC1);

    // Per-image mean / standard deviation used for normalization.
    cv::Scalar meanScalar, stdScalar;
    cv::meanStdDev(floatImage, meanScalar, stdScalar);
    mean = static_cast<float>(meanScalar.val[0]);
    std = static_cast<float>(stdScalar.val[0]);

    std::vector<float> imgData;
for (int h = 0; h < image.rows; h++)
{
    for (int w = 0; w < image.cols; w++)
    {
        imgData.push_back((floatImage.at<float>(h, w) - mean) / std);
    }
}
    return imgData;
}

void MatchEnd2EndInf::SetImgInTensor(const cv::Mat& img, ov::Tensor& tensor)
{
    std::vector<float> imgData;
    cv::Mat grayImg;
    float mean, std;

if (img.channels() == 3)
{
    cv::cvtColor(img, grayImg, cv::COLOR_BGR2GRAY);
}
else
{
    /*      grayImg.copyTo(img);*/
    grayImg = img.clone();
}

    imgData = ApplyTransform1(grayImg, mean, std);

    // Copy the normalized pixels into the pre-allocated input tensor.
    float* data = tensor.data<float>();
    for (size_t i = 0; i < imgData.size(); i++)
    {
        data[i] = imgData[i];
    }

}

void MatchEnd2EndInf::Infer(const cv::Mat& baseImg, const cv::Mat& testImg, cv::detail::MatchesInfo& matches_info)
{
    std::chrono::steady_clock::time_point startTime;
    std::chrono::steady_clock::time_point endTime;
    std::chrono::duration<double, std::milli> duration;
    int nElapsedTime;
    int hh = 0, ww = 0;
    ov::Shape shape1 = ov::Shape(4);
    ov::Shape shape2 = ov::Shape(4);

    // NCHW shapes for the two grayscale input images.
    hh = baseImg.rows; ww = baseImg.cols;
    shape1[0] = 1;
    shape1[1] = 1;
    shape1[2] = hh;
    shape1[3] = ww;

    hh = testImg.rows; ww = testImg.cols;
    shape2[0] = 1;
    shape2[1] = 1;
    shape2[2] = hh;
    shape2[3] = ww;

    startTime = std::chrono::steady_clock::now();
    {
    // -------- Step 3. Create an Inference Request --------
    std::unique_ptr<ov::InferRequest>   inferRequest = std::make_unique<ov::InferRequest>(this->m_pModel->create_infer_request());
    ov::Tensor                          baseTensor(ov::element::f32, shape1);
    ov::Tensor                          testTensor(ov::element::f32, shape2);

    SetImgInTensor(baseImg, baseTensor);
    SetImgInTensor(testImg, testTensor);

    inferRequest->set_tensor("image0", baseTensor);
    inferRequest->set_tensor("image1", testTensor);

    std::cout << "The shape of output image0:" << baseTensor.get_shape() << std::endl;
    std::cout << "The shape of output image1:" << testTensor.get_shape() << std::endl;

    inferRequest->infer();

    ov::Tensor output = inferRequest->get_output_tensor(0);
    ov::Shape output_shape = output.get_shape();
    ov::Tensor output1 = inferRequest->get_output_tensor(1);
    ov::Shape output_shape1 = output1.get_shape();
    ov::Tensor output2 = inferRequest->get_output_tensor(2);
    ov::Shape output_shape2 = output2.get_shape();
    ov::Tensor output3 = inferRequest->get_output_tensor(3);
    ov::Shape output_shape3 = output3.get_shape();
    std::cout << "The shape of output kpts0:" << output_shape << std::endl;
    std::cout << "The shape of output kpts1:" << output_shape1 << std::endl;
    std::cout << "The shape of output matches0:" << output_shape2 << std::endl;
    std::cout << "The shape of output matches1:" << output_shape2 << std::endl;

    }
}
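If the leak shows up as memory growing across repeated calls to Infer(), a hedged reproduction sketch is below; the image paths, iteration count, and the idea of constructing MatchEnd2EndInf once are illustrative assumptions, not part of the original report. One experiment worth trying is to create the ov::InferRequest once and reuse it across calls (instead of building a new std::unique_ptr<ov::InferRequest> every time) to see whether the per-request allocations are what keeps accumulating.

// Hypothetical driver loop to observe the reported memory growth.
cv::Mat baseImg = cv::imread("base.png", cv::IMREAD_COLOR);
cv::Mat testImg = cv::imread("test.png", cv::IMREAD_COLOR);
cv::detail::MatchesInfo matches;
MatchEnd2EndInf matcher;  // assumed to read and compile the model in its constructor
for (int i = 0; i < 10000; ++i)
{
    matcher.Infer(baseImg, testImg, matches);
    // Watch the process resident memory externally (e.g. Task Manager / top)
    // to confirm whether it keeps rising across iterations.
}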

Describe scenario use case

Application: the LightGlue model for image registration. The input is a fixed-size image and the outputs have dynamic shapes. OpenVINO leaks memory after inference; I hope this problem can be solved.

@ZzhangYyan ZzhangYyan added the feature request (request for unsupported feature or enhancement) label Mar 25, 2025
@ZzhangYyan ZzhangYyan changed the title from [Feature Request] A model with dynamic input and dynamic output, such as lightglue, will leak memory after inference with openvino (original title in Chinese) to [Feature Request] A model with dynamic input and dynamic output, such as Lightglue, will have a memory leak after inference with Openvino. Mar 25, 2025
@github-actions github-actions bot added the ep:OpenVINO (issues related to OpenVINO execution provider) label Mar 25, 2025
@ZzhangYyan ZzhangYyan changed the title from [Feature Request] A model with dynamic input and dynamic output, such as Lightglue, will have a memory leak after inference with Openvino. to [Feature Request] A model with dynamic input and dynamic output will have a memory leak after inference with OpenVINO. Mar 25, 2025
@yuslepukhin yuslepukhin removed the feature request (request for unsupported feature or enhancement) label Mar 25, 2025
@sfatimar
Contributor

Hello, are you using OpenVINO directly? If so, please raise the issue in the OpenVINO repo: https://github.com/openvinotoolkit/openvino/issues
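(Note: the code posted above calls the OpenVINO runtime API directly — ov::InferRequest, ov::Tensor — rather than running the ONNX model through ONNX Runtime with the OpenVINO execution provider. For comparison, a minimal sketch of the EP route, assuming a recent ONNX Runtime build with the OpenVINO EP available; the option keys and model path are illustrative assumptions:)

#include <onnxruntime_cxx_api.h>

// Hypothetical: run the same ONNX model through ONNX Runtime's OpenVINO EP.
Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "lightglue");
Ort::SessionOptions session_options;
session_options.AppendExecutionProvider("OpenVINO", {{"device_type", "CPU"}});
Ort::Session session(env, ORT_TSTR("lightglue_end2end.onnx"), session_options);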
