fix: fix storage issue in torchtensor class #1833
Conversation
We need to add tests showing that the original issue is fixed
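A regression test along these lines would exercise the fix. This is a minimal sketch; the construction via parse_obj_as and the exact assertions are assumptions for illustration, not the code added in this PR.

```python
import copy

import torch
from pydantic import parse_obj_as

from docarray.typing import TorchTensor


def test_deepcopy_does_not_share_storage():
    # Any way of obtaining a TorchTensor instance would do; parse_obj_as
    # is used here only for illustration.
    original = parse_obj_as(TorchTensor, torch.zeros(3, 4))
    copied = copy.deepcopy(original)

    # The copy must own its storage: mutating it must not leak into the
    # original, and the two tensors must not point at the same memory.
    copied += 1
    assert not torch.equal(original, copied)
    assert original.data_ptr() != copied.data_ptr()
```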
Codecov Report
All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@            Coverage Diff            @@
##             main    #1833     +/-   ##
=========================================
- Coverage   85.05%   85.02%    -0.03%
=========================================
  Files         135      135
  Lines        9031     9035        +4
=========================================
+ Hits         7681     7682        +1
- Misses       1350     1353        +3

Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
Hey @ai-naymul, we need you to apply black formatting to your code and to sign off your commits.
Force-pushed from d956a97 to c92bca0 (compare)
Hey @ai-naymul, could you also apply the sign-off and black formatting in this PR?
Hey @ai-naymul, now only the DCO is missing.
Can you do the sign-off then?
Signed-off-by: Naymul Islam <[email protected]>
Signed-off-by: Naymul Islam <[email protected]>
Signed-off-by: Naymul Islam <[email protected]>
Signed-off-by: Naymul Islam <[email protected]>
Force-pushed from 58c0cfe to 48b0f32 (compare)
Fixed that. Thanks a lot for your patience.
Thanks a lot for the contribution!
It's a great pleasure to be a part of Jina.
assert original_tensor_float is not copied_tensor_float
assert torch.equal(original_tensor_int, copied_tensor_int)
assert original_tensor_int is not copied_tensor_int
I think this test is not enough. You should rather do
assert original_tensor_float.data_ptr() != copied_tensor_float.data_ptr()
Indeed, in PyTorch each view of a tensor has a different Python ID, but what matters is the underlying storage.
Good PR otherwise!
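The distinction between object identity and storage can be seen with plain PyTorch; the small sketch below is illustrative only and not part of the PR.

```python
import torch

t = torch.arange(6)
view = t.view(2, 3)

# Each view is a distinct Python object, so identity checks always pass...
assert view is not t
assert id(view) != id(t)

# ...but the view aliases the same underlying storage, which is what
# data_ptr() exposes. A genuine copy has a different data pointer.
assert view.data_ptr() == t.data_ptr()
assert t.clone().data_ptr() != t.data_ptr()
```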
Okay, let me fix that!
Hi @samsja,
I accidentally deleted the repo from my local machine, so I had to clone it again. Since this pull request has already been merged, I can't make the change here; to refactor the test I would need to open another PR.
So should I create another PR for that?
Yes, you would have to create another PR for this.
Issue: #1831
The primary purpose of adding the deepcopy method is to provide a more controlled and tailored approach to deep copying instances of the TorchTensor class. This helps prevent any unexpected behaviors arising from shared references to the same memory.
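For illustration, a __deepcopy__ along these lines achieves that for a torch.Tensor subclass. This is a minimal sketch of the general pattern under that assumption, not necessarily the exact code merged in this PR.

```python
import copy

import torch


class TorchTensor(torch.Tensor):
    """Simplified stand-in for docarray's TorchTensor."""

    def __deepcopy__(self, memo):
        # Clone the underlying storage so the copy never aliases the
        # original tensor's memory, then re-wrap it in the subclass.
        if self.requires_grad:
            copied = self.detach().clone().requires_grad_(True)
        else:
            copied = self.clone()
        copied = copied.as_subclass(type(self))
        memo[id(self)] = copied
        return copied


# Usage: the deep copy no longer shares storage with the original.
t = torch.zeros(2, 3).as_subclass(TorchTensor)
t2 = copy.deepcopy(t)
assert t2.data_ptr() != t.data_ptr()
```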