Murthy, S. Murugeswari and Vanathi, A. and Kalaiyarasi, D. and Usha, S. and Saranya, D. (2023) Deep in memory architectures learning of Trade-Offs for productivity. In: UNSPECIFIED.
Full text not available from this repository.

Abstract
Convolutional neural networks (CNNs) achieve high accuracy on several image-understanding tasks. As a result, they are increasingly used to combine images with structured data in multimodal analytics. Because training deep CNNs from scratch is costly, transfer learning has become common: one "reads off" the features of a CNN layer and combines them with additional structured features for a downstream ML task. Since no single layer is guaranteed to give the highest accuracy, several layers must typically be compared to reach an acceptable level of precision. This methodology is inefficient because of repeated CNN inference, and it is prone to failures caused by resource-management difficulties. Vista is the first system to address these problems by raising the feature-transfer workload to a declarative level and formalizing its data model over CNN inference. Vista automates feature-materialization trade-offs, storage management, and system-configuration tuning. Experiments on real-world data show that Vista enables seamless feature transfer, improves system reliability, and reduces runtimes. © 2024 Elsevier B.V. All rights reserved.
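The feature-transfer workflow the abstract describes can be sketched in a few lines: read off the activations of a chosen CNN layer and train a separate downstream model on those features, repeating the comparison across candidate layers. The sketch below is illustrative only, assuming a toy frozen network in place of a real pretrained CNN; the names `extract_features` and `train_logreg` are hypothetical and not Vista's API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a pretrained, frozen CNN: a fixed stack of layers.
WEIGHTS = [rng.standard_normal((8, 16)), rng.standard_normal((16, 4))]

def extract_features(x, layer):
    """Run x through the frozen stack up to `layer`; return the activations
    ("reading off" that layer's features for downstream use)."""
    h = x
    for w in WEIGHTS[:layer]:
        h = np.maximum(h @ w, 0.0)  # ReLU
    return h

def train_logreg(feats, y, steps=500, lr=0.1):
    """Simple downstream model: logistic regression by gradient descent."""
    w = np.zeros(feats.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
        g = p - y
        w -= lr * feats.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

# Toy labelled data: two well-separated clusters in the input space.
x = np.vstack([rng.normal(-2, 1, (50, 8)), rng.normal(2, 1, (50, 8))])
y = np.concatenate([np.zeros(50), np.ones(50)])

# Materialize features from each candidate layer and compare downstream
# accuracy -- the repeated inference that Vista aims to optimize away.
for layer in (1, 2):
    feats = extract_features(x, layer)
    w, b = train_logreg(feats, y)
    acc = (((feats @ w + b) > 0) == y).mean()
    print(f"layer {layer}: feature dim {feats.shape[1]}, accuracy {acc:.2f}")
```

In a real pipeline each `extract_features` call is a costly CNN inference pass, which is why materializing and reusing layer features, as Vista does automatically, matters.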
| Item Type: | Conference or Workshop Item (Paper) |
|---|---|
| Subjects: | Computer Science > Computer Science |
| Divisions: | Engineering and Technology > Aarupadai Veedu Institute of Technology, Chennai > Electronics & Communication Engineering |
| Depositing User: | Unnamed user with email techsupport@mosys.org |
| Last Modified: | 01 Dec 2025 05:18 |
| URI: | https://vmuir.mosys.org/id/eprint/2428 |