
tensor size >4G support for bwd/wrw #80

Open

carlushuang wants to merge 1 commit into master from split_4G_bwd_wrw

Conversation

@carlushuang
Collaborator

No description provided.


self._emit(f"s_mul_i32 s[{s.s_tmp()}], s[{s.s_by()}], s[{s.s_tmp(4)}]")
self._emit(f"s_mul_hi_u32 s[{s.s_tmp(1)}], s[{s.s_by()}], s[{s.s_tmp(4)}]")
self._emit(f"s_add_u32 s[{s.s_p_in()}], s[{s.s_p_in()}], s[{s.s_tmp()}]")
Contributor

Since a range check is used, do s_p_in(2) and s_p_out(2) also need to be modified here?
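For context on the question above: the three emitted scalar instructions compute a byte offset as a full 64-bit product (`s_mul_i32` gives the low 32 bits, `s_mul_hi_u32` the high 32 bits) and then add the low word into the base pointer. A minimal Python sketch of that arithmetic is below; the register names, concrete values, and the follow-up carry add (`s_addc_u32` into the high pointer word) are assumptions for illustration, not taken from the patch.

```python
MASK32 = (1 << 32) - 1

def s_mul_i32(a, b):
    # low 32 bits of the product, as s_mul_i32 produces
    return (a * b) & MASK32

def s_mul_hi_u32(a, b):
    # high 32 bits of the unsigned 32x32 -> 64 product, as s_mul_hi_u32 produces
    return ((a & MASK32) * (b & MASK32)) >> 32

def add64(base_lo, base_hi, off_lo, off_hi):
    # models an s_add_u32 / s_addc_u32 pair: 64-bit add via two 32-bit adds with carry
    lo = base_lo + off_lo
    carry = lo >> 32
    return lo & MASK32, (base_hi + off_hi + carry) & MASK32

# hypothetical values whose product exceeds 32 bits, i.e. a >4G offset
by, stride = 3, 2_000_000_000
off_lo = s_mul_i32(by, stride)
off_hi = s_mul_hi_u32(by, stride)

base_lo, base_hi = 0x1000, 0x2       # hypothetical 64-bit base split across two SGPRs
lo, hi = add64(base_lo, base_hi, off_lo, off_hi)
assert ((hi << 32) | lo) == ((base_hi << 32) | base_lo) + by * stride
```

The sketch shows why the question matters: once offsets can exceed 32 bits, the high word of the 64-bit pointer must receive both `off_hi` and the carry out of the low-word add.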


int splits = igemm_split_batch_size(n, wi, hi, 1, c, k, wo, ho, 1, data_byte);
assert(splits != 0);
n = n/splits; // split batch size here
Contributor

Is `n` never used after this point?
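The split logic being questioned divides the batch dimension so that each per-split tensor stays under the 4 GiB addressing limit. A hedged sketch of such a helper is below; the name, signature, and divisor-search strategy are assumptions for illustration, since the actual `igemm_split_batch_size` implementation is not shown in this excerpt.

```python
def igemm_split_batch_size_sketch(n, hi, wi, c, k, ho, wo, data_byte, limit=1 << 32):
    # Hypothetical sketch: find the smallest split count that divides n
    # such that both the input and output tensor slices fit under `limit` bytes.
    in_size = n * c * hi * wi * data_byte
    out_size = n * k * ho * wo * data_byte
    if max(in_size, out_size) < limit:
        return 1                      # no split needed
    for splits in range(2, n + 1):
        if n % splits:
            continue                  # only even divisions of the batch
        if max(in_size, out_size) // splits < limit:
            return splits
    return 0                          # cannot split below the limit (caller asserts)
```

Under this reading, the `assert(splits != 0)` in the snippet guards the "cannot split" case, and `n = n/splits` shrinks the per-invocation batch, which is why the reviewer asks whether the reduced `n` is actually consumed afterwards.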


2 participants