
fix: fix storage issue in torchtensor class #1833


Merged
merged 4 commits into docarray:main from ai-naymul:fix-torchtensor-storage-issue
Dec 9, 2023

Conversation

ai-naymul
Contributor

Issue: #1831

The primary purpose of adding the deepcopy method is to provide a more controlled and tailored approach to deep copying instances of the TorchTensor class. This helps prevent any unexpected behaviors arising from shared references to the same memory.
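To illustrate the idea, here is a minimal, hypothetical sketch of a `__deepcopy__` override on a `torch.Tensor` subclass. This is not docarray's actual code; the class name mirrors docarray's `TorchTensor`, but the body is illustrative only. The key point is that `clone()` allocates fresh storage and works for any dtype, sidestepping constructor paths that assume `float32`:

```python
import copy

import torch


class TorchTensor(torch.Tensor):
    """Hypothetical minimal stand-in for docarray's TorchTensor."""

    def __deepcopy__(self, memo):
        # clone() allocates new storage for any dtype, so the copy
        # never aliases the original tensor's memory.
        cloned = self.detach().clone().as_subclass(type(self))
        memo[id(self)] = cloned
        return cloned


# Works for a non-float32 dtype, which was the failing case in #1831:
t = torch.tensor([1, 2, 3], dtype=torch.int64).as_subclass(TorchTensor)
c = copy.deepcopy(t)
```

After the copy, `c` holds the same values as `t` but points at different storage, so mutating one cannot affect the other.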

Member

@JoanFM JoanFM left a comment


We need to add tests showing that the original issue is fixed


codecov bot commented Dec 3, 2023

Codecov Report

All modified and coverable lines are covered by tests ✅

Comparison: base (522811f) 85.05% vs. head (48b0f32) 85.02%.
Report is 1 commit behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1833      +/-   ##
==========================================
- Coverage   85.05%   85.02%   -0.03%     
==========================================
  Files         135      135              
  Lines        9031     9035       +4     
==========================================
+ Hits         7681     7682       +1     
- Misses       1350     1353       +3     
Flag Coverage Δ
docarray 85.02% <100.00%> (-0.03%) ⬇️

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.

@JoanFM
Member

JoanFM commented Dec 3, 2023

Hey @ai-naymul ,

We need you to apply black formatting to your code and to sign-off your commits.

@ai-naymul ai-naymul force-pushed the fix-torchtensor-storage-issue branch from d956a97 to c92bca0 Compare December 4, 2023 17:50
@JoanFM
Member

JoanFM commented Dec 4, 2023

hey @ai-naymul,

Now you should also apply the sign-off and black formatting in this PR.

@JoanFM
Member

JoanFM commented Dec 7, 2023

Hey @ai-naymul ,

Now only DCO is missing

@JoanFM
Member

JoanFM commented Dec 9, 2023

Hey @ai-naymul ,

Now only DCO is missing

Can you do the sign-off then?

@ai-naymul ai-naymul force-pushed the fix-torchtensor-storage-issue branch from 58c0cfe to 48b0f32 Compare December 9, 2023 16:37
@ai-naymul
Contributor Author

Hey @ai-naymul ,
Now only DCO is missing

Can you do the sign-off then?

Fixed that. Thanks a lot for your patience!

@JoanFM JoanFM linked an issue Dec 9, 2023 that may be closed by this pull request
@JoanFM JoanFM merged commit 3cfa0b8 into docarray:main Dec 9, 2023
@JoanFM
Member

JoanFM commented Dec 9, 2023

Thanks a lot for the contribution!

@ai-naymul
Contributor Author

Thanks a lot for the contribution!

It's a great pleasure to be a part of Jina.

Comment on lines +43 to +45
assert original_tensor_float is not copied_tensor_float
assert torch.equal(original_tensor_int, copied_tensor_int)
assert original_tensor_int is not copied_tensor_int
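The quoted assertions lack their setup. A plausible self-contained reconstruction follows; the variable names are taken from the diff, but plain `torch` tensors stand in for docarray `TorchTensor` instances so the sketch runs on its own:

```python
import copy

import torch

# Stand-ins for the test fixtures; in the real test these are
# docarray TorchTensor instances.
original_tensor_float = torch.rand(3)
original_tensor_int = torch.tensor([1, 2, 3], dtype=torch.int64)

copied_tensor_float = copy.deepcopy(original_tensor_float)
copied_tensor_int = copy.deepcopy(original_tensor_int)
```

The assertions under review then check value equality and object identity for both dtypes.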
Member


I think this test is not enough. You should rather do

assert original_tensor_float.data_ptr() != copied_tensor_float.data_ptr()

Indeed, in PyTorch each view of a tensor has a different Python ID, but what matters is the underlying storage.

Good PR otherwise!
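The reviewer's point can be demonstrated with plain `torch` (a small illustrative sketch, not part of the PR): two views are distinct Python objects, so an `is not` check passes even though they share storage, whereas `data_ptr()` exposes the shared memory.

```python
import torch

base = torch.arange(6)
view = base.view(2, 3)

# Distinct Python objects, so `is not` holds...
distinct_objects = base is not view
# ...but both point at the same underlying storage:
shared_storage = base.data_ptr() == view.data_ptr()

# A genuine deep copy allocates fresh storage:
copied = base.clone()
fresh_storage = copied.data_ptr() != base.data_ptr()
```

This is why comparing `data_ptr()` is the stronger assertion for a deepcopy test.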

Contributor Author


Okay, let me fix that!

Contributor Author


I think this test is not enough. You should rather do

assert original_tensor_float.data_ptr() != copied_tensor_float.data_ptr()

Indeed, in PyTorch each view of a tensor has a different Python ID, but what matters is the underlying storage.

Good PR otherwise!

Hi @samsja,
I accidentally deleted the repo from my local machine, so I had to clone it again. Since this pull request is already merged, I can't change it; refactoring the test will need a new PR.
So should I create another PR for that?

Member


Yes, you would have to create another PR for this.

@JoanFM JoanFM mentioned this pull request Dec 22, 2023
Successfully merging this pull request may close these issues.

TorchTensor deepcopy raises error when dtype is not float32
3 participants