### May 14, 2024
* Support loading PaliGemma jax weights into SigLIP ViT models with average pooling.
* Add Hiera models from Meta (https://github.com/facebookresearch/hiera).
* Add `normalize=` flag for transforms, return non-normalized torch.Tensor with original dtype (for `chug`)
* Version 1.0.3 release

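The `normalize=` idea above — letting a transform pipeline stop before the final normalize step and hand back values in their original dtype — can be sketched without any dependencies. A minimal, hypothetical sketch of the pattern (illustrative names, not timm's actual transform factory):

```python
# Hypothetical sketch of a normalize= switch on a transform pipeline.
# With normalize=True the pixel values are standardized to floats; with
# normalize=False the original integer values pass through untouched.

def make_transform(normalize=True, mean=127.5, std=127.5):
    def transform(pixels):
        if not normalize:
            return pixels                       # original values/dtype kept
        return [(p - mean) / std for p in pixels]
    return transform

raw = [0, 128, 255]
print(make_transform(normalize=False)(raw))  # [0, 128, 255]
print(make_transform()(raw))                 # [-1.0, ~0.004, 1.0]
```

Skipping normalization like this is useful when a downstream consumer (e.g. `chug`-style data pipelines) wants to do its own normalization later.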
### May 11, 2024
### April 11, 2024
* Prepping for a long overdue 1.0 release, things have been stable for a while now.
* Significant feature that's been missing for a while, `features_only=True` support for ViT models with flat hidden states or non-std module layouts (so far covering `'vit_*', 'twins_*', 'deit*', 'beit*', 'mvitv2*', 'eva*', 'samvit_*', 'flexivit*'`)
* Above feature support achieved through a new `forward_intermediates()` API that can be used with a feature wrapping module or directly.
```python
model = timm.create_model('vit_base_patch16_224')
final_feat, intermediates = model.forward_intermediates(input)
```
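The calling pattern of that API — a final feature tensor plus a list of intermediate feature maps, optionally filtered by block index — can be mimicked in plain Python. A dependency-free conceptual sketch of the pattern, not timm's actual implementation:

```python
# Conceptual sketch of the forward_intermediates() calling pattern:
# run a stack of blocks, recording selected block outputs alongside the
# final result. Names and "blocks" here are illustrative only.

class TinyModel:
    def __init__(self, n_blocks=4):
        # each "block" is just +1 here, standing in for a transformer block
        self.blocks = [lambda x: x + 1 for _ in range(n_blocks)]

    def forward_intermediates(self, x, indices=None):
        take = set(indices) if indices is not None else set(range(len(self.blocks)))
        intermediates = []
        for i, blk in enumerate(self.blocks):
            x = blk(x)
            if i in take:
                intermediates.append(x)
        return x, intermediates

final_feat, intermediates = TinyModel().forward_intermediates(0)
print(final_feat, intermediates)  # 4 [1, 2, 3, 4]
```

The `indices` filter plays the role of selecting which blocks' outputs to return, analogous to picking feature levels from the wrapper path.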
* 0.8.15dev0

### Feb 20, 2023
* Add 320x320 `convnext_large_mlp.clip_laion2b_ft_320` and `convnext_large_mlp.clip_laion2b_ft_soup_320` CLIP image tower weights for features & fine-tune
* 0.8.13dev0 pypi release for latest changes w/ move to huggingface org

### Feb 16, 2023
* Add 'group matching' API to all models to allow grouping model parameters for application of 'layer-wise' LR decay, lr scale added to LR scheduler
* Gradient checkpointing support added to many models
* `forward_head(x, pre_logits=False)` fn added to all models to allow separate calls of `forward_features` + `forward_head`
* All vision transformer and vision MLP models updated to return non-pooled / non-token selected features from `forward_features`, for consistency with CNN models, token selection or pooling now applied in `forward_head`
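The `forward_features` / `forward_head` split described above can be sketched conceptually: features stay unpooled until the head applies pooling (and optionally stops before the classifier via `pre_logits=True`). A hypothetical, dependency-free sketch of the calling convention, not timm code:

```python
# Conceptual sketch of the forward_features / forward_head split.
# forward_features returns all token features with no pooling or token
# selection; forward_head pools (mean here) and applies a stand-in head.

class SplitForwardModel:
    def forward_features(self, x):
        # return per-token features, unpooled
        return [t * 2 for t in x]

    def forward_head(self, feats, pre_logits=False):
        pooled = sum(feats) / len(feats)   # pooling happens here, not earlier
        if pre_logits:
            return pooled                  # pooled features, classifier skipped
        return pooled + 1                  # stand-in for the classifier head

    def forward(self, x):
        return self.forward_head(self.forward_features(x))

m = SplitForwardModel()
feats = m.forward_features([1.0, 2.0, 3.0])     # [2.0, 4.0, 6.0] -- unpooled
print(m.forward_head(feats, pre_logits=True))   # 4.0
print(m.forward([1.0, 2.0, 3.0]))               # 5.0
```

Keeping pooling out of `forward_features` is what makes the ViT/MLP models consistent with the CNN models, which also return unpooled spatial features there.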

### Feb 2, 2022
* [Chris Hughes](https://github.com/Chris-hughes10) posted an exhaustive run through of `timm` on his blog yesterday. Well worth a read. [Getting Started with PyTorch Image Models (timm): A Practitioner’s Guide](https://towardsdatascience.com/getting-started-with-pytorch-image-models-timm-a-practitioners-guide-4e77b4bf9055)