Add Instance Methods to PyTorch Frontend #3612
Comments
Hey, can you please elaborate on that? What does adding PyTorch instance methods to Ivy mean? @jkeane508 Thanks.
Hey @Anindyadeep, these are new frontend tasks to be completed, but there is still some documentation to draw up and some examples to set. I forgot to convert this to a draft, so I am going to close it for now and reopen it when it's ready :) Sorry for the confusion!
@jkeane508 Okay, I get it. Thanks for updating. I was also looking for examples on YouTube about contributing and frontend API examples. Ivy is doing really great, and I'm looking forward to contributing to this repo.
Please assign this to me so I can work on this issue.
@Aryan8912 Create an issue with the name of the method you want to implement and comment the issue number here.
I have made a pull request for this; can anybody check and merge it, or let me know if any modification is needed?
I have created a PR for this function as well.
Hi @hirwa-nshuti, can you please update the functions that are already implemented, e.g. device()?
Add Instance Methods to PyTorch Frontend:
_Please keep in mind that the proper way to link an issue to this list is to comment "- [ ] #issue_number", while the issue's title should only include the name of the function you've chosen._
A minimal sketch of what a typical method addition looks like is given directly after the list.
new_tensor
new_full
new_empty
new_ones #5265
new_zeros
is_cuda #6639
is_quantized #16508
is_meta #19590
device
grad
ndim
real #8882
imag #14411
abs #5804
abs_ #5818
absolute #5819
absolute_ #5820
acos #6011
acos_
arccos
arccos_
add
add_
addbmm #14645
addbmm_ #14646
addcdiv
addcdiv_ #17195
addcmul #14983
addcmul_ #14984
addmm #14985
addmm_ #14986
sspaddmm #27781
addmv #17485
addmv_ #20959
addr #15741
addr_ #15859
adjoint #19674
allclose
amax #5789
amin #5889
aminmax #9946
angle #19673
apply_ #18185
argmax #6556
argmin #7574
argsort #9267
argwhere #7859
asin #5812
asin_
arcsin #5814
arcsin_
as_strided #16267
atan
atan_
arctan #6020
arctan_ #6553
atan2 #7898
atan2_
arctan2 #10227
arctan2_ #10493
all #12996
any #9438
baddbmm #17001
baddbmm_ #20970
bernoulli #19438
bernoulli_ #22976
bfloat16
bincount #13643
bitwise_not
bitwise_not_ #17484
bitwise_and #6693
bitwise_and_
bitwise_or #9848
bitwise_or_ #12333
bitwise_xor #13007
bitwise_xor_ #21659
bitwise_left_shift
bitwise_left_shift_ #21770
bitwise_right_shift #14845
bitwise_right_shift_ #23768
bmm #20116
bool
byte
broadcast_to
cauchy_ #26790
ceil #7081
ceil_
char #21621
cholesky #17922
cholesky_inverse #26402
cholesky_solve
chunk
clamp #7751
clamp_
clip #13639
clip_ #13640
clone
contiguous
copy_ #21893
conj #19863
conj_physical
conj_physical_
resolve_conj
resolve_neg
copysign #15746
copysign_
cos #5739
cos_ #6068
cosh
cosh_
corrcoef #26843
count_nonzero #13890
cov #26453
acosh
acosh_
arccosh #14131
arccosh_ #14138
cpu #21086
cross
cuda #23059
cummax #22325
cummin #22326
cumprod
cumprod_ #13055
cumsum
cumsum_ #10231
chalf
cfloat
cdouble
data_ptr
deg2rad #6077
dequantize
det
dense_dim
detach #6315
detach_ #15451
diag #17077
diag_embed
diagflat #21823
diagonal #20855
diagonal_scatter
fill_diagonal_
fmax #20123
fmin #13164
diff #23418
digamma
digamma_
dim
dist #23679
div
div_
divide #21752
dot #18253
double #21148
dsplit #10784
eig #23690
element_size
eq_ #16246
equal #15849
erf #16257
erf_ #21739
erfc #26880
erfc_ #27117
erfinv
erfinv_
exp #13158
exp_ #14655
expm1 #15805
expm1_ #21743
expand
expand_as
exponential_
fix #14304
fix_ #14305
fill_
flatten
flip
fliplr #12782
flipud
float
float_power #26551
float_power_
floor #6510
floor_ #16999
floor_divide #9512
floor_divide_
fmod #15063
fmod_ #14946
frac #26607
frac_
frexp
gather #6318
gcd #20951
gcd_ #21748
ge
ge_
greater_equal_ #16266
geometric_
geqrf
ger
get_device
gt
greater #15993
greater_ #16266
half
hardshrink
heaviside #17515
histc
histogram
hsplit #10785
hypot
hypot_
i0
i0_
igamma
igamma_
igammac
igammac_
index_add_ #13950
index_add #13951
index_copy_
index_copy
index_fill_ #21558
index_fill #21558
index_put_
index_put
index_reduce_
index_reduce
index_select
indices
inner
int
int_repr
inverse
isclose
isfinite #21943
isinf #17434
isposinf
isneginf
isnan #21554
is_contiguous
is_complex #15732
is_conj
is_floating_point #26256
is_inference
is_leaf #16634
is_pinned
is_set_to
is_shared
is_signed
is_sparse
istft
isreal #21267
item
kthvalue
lcm #21624
lcm_ #21625
ldexp
ldexp_
le
le_
less_equal
less_equal_
lerp
lerp_
lgamma
lgamma_
log #5806
log_
logdet #14666
log10 #14275
log10_ #17507
log1p #6676
log1p_ #21801
log2
log2_ #21567
log_normal_ #21891
logaddexp #19620
logaddexp2 #21803
logcumsumexp
logsumexp
logical_and #13094
logical_and_ #22133
logical_not #13957
logical_not_
logical_or #13589
logical_or_ #22705
logical_xor #27451
logical_xor_
logit #24133
logit_
long
lstsq
lt
lt_
less
less_
lu
lu_solve
as_subclass
map_
masked_scatter_
masked_scatter
masked_fill_
masked_fill
masked_select
matmul #6557
matrix_power #24141
matrix_exp
max #5836
maximum
mean
nanmean #15764
median #11333
nanmedian
min #5826
minimum #26254
mm
smm
mode #27219
movedim #17190
moveaxis
msort #22245
mul
mul_
multiply #14573
multiply_ #21806
multinomial
mv
mvlgamma
mvlgamma_
nansum #25679
narrow #16273
narrow_copy
ndimension
nan_to_num
nan_to_num_
ne
ne_
not_equal #15699
not_equal_
neg #7509
neg_ #21915
negative #21714
negative_ #23760
nelement
nextafter
nextafter_
nonzero
norm
normal_
numel
numpy #9344
orgqr
ormqr
outer
permute
pin_memory
pinverse
polygamma
polygamma_
positive #26281
pow #6827
pow_
prod
put_
qr
qscheme
quantile #21772
nanquantile
q_scale
q_zero_point
q_per_channel_scales
q_per_channel_zero_points
q_per_channel_axis
rad2deg #26304
random_ #22013
ravel #13620
reciprocal #6513
reciprocal_ #22035
record_stream
register_hook
remainder #14496
remainder_ #22021
renorm
renorm_
repeat
repeat_interleave
requires_grad
requires_grad_ #16636
reshape
reshape_as
resize_
resize_as_
retain_grad
retains_grad
roll
rot90
round
round_ #21619
rsqrt
rsqrt_ #21615
scatter
scatter_
scatter_add_
scatter_add
scatter_reduce_
scatter_reduce
select
select_scatter
set_
share_memory_
short #14779
sigmoid
sigmoid_
sign #14287
sign_ #17511
signbit
sgn
sgn_
sin #5040
sin_ #5041
sinc #21142
sinc_
sinh
sinh_
asinh #5723
asinh_ #5771
arcsinh #14205
arcsinh_ #21824
size
slogdet
slice_scatter #22584
sort #7798
split #11153
sparse_mask
sparse_dim
sqrt
sqrt_ #12731
square #14048
square_ #21826
squeeze
squeeze_ #15844
std #14994
stft
storage_type
stride
sub
subtract
subtract_ #13992
sum #7402
sum_to_size
svd #20854
swapaxes
swapdims
symeig
t #13553
t_
tensor_split #11154
tile
to
to_mkldnn
take
take_along_dim
tan
tan_
tanh #6743
tanh_ #6744
atanh
atanh_
arctanh
arctanh_
tolist
topk
to_dense
to_sparse
to_sparse_csr
to_sparse_csc
to_sparse_bsr
to_sparse_bsc
trace
transpose
transpose_
triangular_solve
tril
tril_ #19502
triu #21607
triu_ #21609
true_divide #21950
true_divide_ #21952
trunc #14302
trunc_ #14303
type
type_as
unbind
unflatten #25905
unfold
uniform_
unique
unsqueeze
unsqueeze_
values
var
vdot #26282
view
view_as
vsplit #10786
where
xlogy #21869
xlogy_ #21868
zero_ #21222
backward #15714
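For anyone picking up one of the methods above, the shape of an addition is small: the frontend `Tensor` class wraps an underlying Ivy array, and each instance method simply forwards to the corresponding function in Ivy's functional API (or to an existing frontend function). The snippet below is only an illustrative sketch of that delegation pattern, not the exact Ivy code; the class shown here and its `_ivy_array` attribute are stand-ins, and the real implementation in `tensor.py` also applies Ivy's frontend decorators and dtype handling.

```python
# Illustrative sketch only -- the actual frontend Tensor class in
# ivy/functional/frontends/torch/tensor.py carries more machinery
# (decorators, dtype/casting helpers), but new instance methods
# generally follow this delegation pattern.
import ivy


class Tensor:
    def __init__(self, array):
        # the frontend tensor just wraps an ivy array
        self._ivy_array = ivy.asarray(array)

    def sinh(self):
        # out-of-place method: delegate to the functional API
        # and wrap the result in a new frontend Tensor
        return Tensor(ivy.sinh(self._ivy_array))

    def sinh_(self):
        # in-place variant: overwrite the wrapped array and return self,
        # mirroring torch's trailing-underscore convention
        self._ivy_array = ivy.sinh(self._ivy_array)
        return self
```

Keeping the methods this thin is deliberate: the heavy lifting lives in the functional API, so an instance method is usually just a one- or two-line wrapper plus its test.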
The main file paths where these functions are likely to be added are listed below; a rough illustration of the kind of check the accompanying tests perform follows the list.
ivy/functional/frontends/torch/Tensor.py
ivy_tests/test_ivy/test_frontends/test_torch/test_tensor.py
ivy/functional/frontends/torch/tensor.py
ivy/functional/frontends/torch/indexing_slicing_joining_mutating_ops.py
ivy_tests/test_ivy/test_frontends/test_torch/test_indexing_slicing_joining_mutating_ops.py
ivy/functional/frontends/torch/pointwise_ops.py
ivy_tests/test_ivy/test_frontends/test_torch/test_pointwise_ops.py
ivy_tests/test_ivy/test_functional/test_experimental/test_core/test_manipulation.py
ivy/functional/frontends/torch/__init__.py
ivy/functional/frontends/torch/creation_ops.py
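The real tests in `test_tensor.py` are written with Ivy's hypothesis-based frontend test helpers, and the exact decorators and strategies are best copied from a neighbouring test for a similar method. Stripped of that machinery, what such a test ultimately verifies is that the frontend method and the native `torch.Tensor` method agree on the same input. A minimal standalone illustration of that check, assuming `sinh` as the method under test and the numpy backend for the Ivy side:

```python
# Standalone illustration only; the actual suite uses Ivy's frontend test
# helpers and hypothesis strategies rather than a hand-rolled check like this.
import numpy as np
import torch
import ivy

ivy.set_backend("numpy")  # assumption: run the ivy side on the numpy backend


def check_sinh_matches_torch(data):
    # what the frontend instance method ultimately delegates to
    frontend_result = ivy.to_numpy(ivy.sinh(ivy.asarray(data)))
    # ground truth from native torch
    native_result = torch.tensor(data).sinh().numpy()
    assert np.allclose(frontend_result, native_result)


check_sinh_matches_torch([0.0, 0.5, -1.25])
```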