• Hi!
    I'm Jonathan

    Interested in everything and passionate about trying new things.

    Download Résumé

  • I'm an AI Engineer

    I majored in Electrical Engineering, and my research area is Computer Vision.

    View GitHub

About

Hi everyone! My name is Chi-Mao Fan (范植貿), and you can also call me A-Mao. I earned a master's degree from the Department of Electrical Engineering at National Chung Hsing University, majoring in computer science, computer vision, and artificial intelligence.

I am currently an AI R&D engineer in the Digital Imaging Technology department of ASUS in Taipei, Taiwan. I am interested in anything new and fun, love learning, and am eager to get hands-on with technologies I have not worked with before.

If you would like to know more about my experience and publications, please check out the information below!

Programming Skills

C/C++: 80%

C#: 70%

Python: 90%

JavaScript: 60%

HTML: 50%

CSS: 50%

Education

I received my master's degree from the Department of Electrical Engineering at National Chung Hsing University, majoring in computer vision and artificial intelligence with a research focus on image restoration. During my studies I earned an outstanding GPA of 4.13/4.15 and took part in several deep learning competitions covering data analysis, object detection, and image segmentation. I also published four papers accepted at international conferences including ICIP, EUSIPCO, and ISCAS, and my master's thesis was selected for the 2022 IEEE Taipei Section Best Master/PhD Thesis Award. I hope to keep sharpening my programming skills to solve the problems I encounter in real life.

I received my bachelor's degree from the Department of Electrical Engineering at Yuan Ze University, studying electronics, circuits, and electromagnetics, along with fundamental programming courses such as data structures and C. My capstone project was an Android mobile application built with JavaScript, SQL, and socket connections, and through it I found that I prefer software development to more hardware-oriented work.

Work Experience

AI R&D Engineer, 2022 - present

I work as an AI R&D engineer in ASUS's DIT department. Besides a solid understanding of computer vision AI algorithms, the role also requires front-end or back-end programming skills to build the applications that deliver our solutions.

Competitions

Audio-to-Score Transcription (2022)

An AI application that integrates source separation and music transcription models to convert input music into sheet music. It won third place in the Intel DevCUP. A minimal pipeline sketch follows the link below.

Github
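
As a rough illustration of the pipeline described above, the sketch below chains a separation stage and a transcription stage. The functions `separate_sources` and `transcribe_notes` are hypothetical placeholders standing in for the actual models used in the entry; see the repository for the real implementation.

```python
# Minimal sketch of the two-stage idea: source separation -> note transcription.
# `separate_sources` and `transcribe_notes` are hypothetical stubs, not the
# competition code.
import numpy as np

def separate_sources(mixture: np.ndarray, sr: int) -> dict:
    """Hypothetical separation model: returns per-instrument waveforms."""
    # A real system would call a trained separation network here.
    return {"piano": mixture}  # placeholder: pass the mix through unchanged

def transcribe_notes(waveform: np.ndarray, sr: int) -> list:
    """Hypothetical transcription model: returns (onset_sec, midi_pitch) events."""
    # A real system would run onset/pitch estimation and quantize to a score.
    return [(0.0, 60), (0.5, 64), (1.0, 67)]  # placeholder C-E-G

if __name__ == "__main__":
    sr = 16000
    mixture = np.random.randn(sr * 2).astype(np.float32)  # 2 seconds of fake audio
    stems = separate_sources(mixture, sr)
    for name, wav in stems.items():
        events = transcribe_notes(wav, sr)
        print(name, events)  # these events would then be rendered into sheet music
```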

Orchid Classification (2022)

Combined seven different classification models with various ensemble strategies to predict orchid species. Ranked 18th out of 743 participating teams nationwide. A minimal ensembling sketch follows the link below.

Github
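
The sketch below illustrates one common way to combine several classifiers: average their softmax probabilities and take the arg-max. The seven backbones and the class count used in the competition are not listed here, so small torchvision ResNets and an assumed class count stand in for them.

```python
# Probability-averaging ensemble over several classifiers that each output raw
# logits of shape (batch, num_classes). The backbones and class count below are
# stand-in assumptions, not the competition configuration.
import torch
import torchvision.models as models

num_classes = 219  # assumption: number of orchid species

def build_models():
    nets = []
    for ctor in (models.resnet18, models.resnet34, models.resnet50):
        net = ctor(num_classes=num_classes)  # untrained stand-ins
        net.eval()
        nets.append(net)
    return nets

@torch.no_grad()
def ensemble_predict(nets, images):
    probs = torch.stack([torch.softmax(net(images), dim=1) for net in nets])
    return probs.mean(dim=0).argmax(dim=1)  # average probabilities, then pick the class

if __name__ == "__main__":
    nets = build_models()
    images = torch.randn(4, 3, 224, 224)
    print(ensemble_predict(nets, images))
```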

Data Regression (2021)

Used a multilayer perceptron to predict industrial data, retraining the prediction model on site in real time as new training data was provided. Reached the finals out of 118 teams. A minimal sketch of the on-site retraining idea follows the link below.

Github
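
A minimal sketch of the on-site retraining idea, using scikit-learn's MLPRegressor with partial_fit on synthetic data; the real features, preprocessing, and network size are assumptions not shown on this page.

```python
# Fit an MLP on the initial data, then incrementally update it with partial_fit
# when new samples are released on site, instead of retraining from scratch.
# All data here is synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_init = rng.normal(size=(500, 8))
y_init = X_init @ rng.normal(size=8) + 0.1 * rng.normal(size=500)

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X_init, y_init)  # offline training before the on-site stage

# New data revealed on site: a few quick incremental passes.
X_new = rng.normal(size=(100, 8))
y_new = X_new @ rng.normal(size=8)
for _ in range(20):
    model.partial_fit(X_new, y_new)

print(model.predict(X_new[:5]))
```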

Rice Plant Detection (2021)

Used an object detection model (YOLOv4) to detect and localize rice plants in aerial images. Placed in the top 10 out of 523 registered teams nationwide (9/523). A minimal inference sketch follows the link below.

Github
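
A minimal sketch of running a YOLOv4 detector through OpenCV's DNN module; the config/weight file names, input size, and thresholds are assumptions, and the actual competition code lives in the repository above.

```python
# Load a Darknet YOLOv4 model with OpenCV's DNN API and print the center point
# of each detection. Paths and thresholds are hypothetical.
import cv2

net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")  # hypothetical paths
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(608, 608), scale=1 / 255.0, swapRB=True)

image = cv2.imread("aerial_tile.jpg")  # hypothetical aerial image tile
class_ids, scores, boxes = model.detect(image, confThreshold=0.4, nmsThreshold=0.5)
for (x, y, w, h), score in zip(boxes, scores):
    cx, cy = x + w // 2, y + h // 2  # box center as the localization output
    print(f"rice plant at ({cx}, {cy}) with confidence {float(score):.2f}")
```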

Publications

Image Restoration Using Selective Residual Blocks on an Improved Hierarchical Encoder-Decoder Network
In this thesis, we build on the lightweight hierarchical architecture U-Net and improve the residual dense block (RDB), which performs well in image restoration but consumes a large amount of memory, into a more efficient module that occupies far less GPU memory, called the selective residual block (SRB). We also improve the hierarchical U-Net architecture by adding gatepost feature paths, yielding M-Net+. Compared with the traditional U-Net, the proposed M-Net+ captures richer spatial feature information, and combining it with the SRB lets the two complement each other. In addition, we propose loss functions based on two key image restoration metrics, peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), to optimize our network. The proposed architecture is applied to nine different image restoration tasks, including denoising, deblurring, deraining, dehazing, and low-light image enhancement, and achieves strong results in both quantitative metrics and visual quality. This thesis was selected for the 2022 IEEE Taipei Section Best Master/PhD Thesis Award 🎉
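
As an illustration of the metric-based losses mentioned above, the sketch below turns PSNR and SSIM into trainable PyTorch objectives. It is not the exact formulation used in the thesis: the SSIM term uses a simplified uniform window and the loss weights are assumptions.

```python
# PSNR- and SSIM-oriented losses: higher PSNR / SSIM -> lower loss.
import torch
import torch.nn.functional as F

def psnr_loss(pred, target, max_val=1.0, eps=1e-8):
    mse = F.mse_loss(pred, target)
    psnr = 10.0 * torch.log10(max_val ** 2 / (mse + eps))
    return -psnr  # maximizing PSNR == minimizing its negative

def ssim_loss(pred, target, window=11, c1=0.01 ** 2, c2=0.03 ** 2):
    # Uniform (box) window instead of the usual Gaussian, to keep the sketch short.
    pad = window // 2
    channels = pred.size(1)
    kernel = torch.ones(channels, 1, window, window, device=pred.device) / window ** 2
    mu_x = F.conv2d(pred, kernel, padding=pad, groups=channels)
    mu_y = F.conv2d(target, kernel, padding=pad, groups=channels)
    var_x = F.conv2d(pred * pred, kernel, padding=pad, groups=channels) - mu_x ** 2
    var_y = F.conv2d(target * target, kernel, padding=pad, groups=channels) - mu_y ** 2
    cov = F.conv2d(pred * target, kernel, padding=pad, groups=channels) - mu_x * mu_y
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return 1.0 - ssim.mean()  # SSIM of 1 means identical images

if __name__ == "__main__":
    pred = torch.rand(2, 3, 64, 64, requires_grad=True)
    target = torch.rand(2, 3, 64, 64)
    loss = 0.01 * psnr_loss(pred, target) + ssim_loss(pred, target)  # weights are assumptions
    loss.backward()
    print(float(loss))
```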

Half Wavelet Attention on M-Net+ for Low-Light Image Enhancement
Low-light image enhancement is a computer vision task that brightens dark images to an appropriate level. It can also be seen as an ill-posed problem in the image restoration domain. With the success of deep neural networks, convolutional neural networks have surpassed traditional algorithm-based methods and become the mainstream in the computer vision area. To advance the performance of enhancement algorithms, we propose an image enhancement network (HWMNet) based on an improved hierarchical model: M-Net+. Specifically, we use a half wavelet attention block on M-Net+ to enrich the features from the wavelet domain. Our HWMNet achieves competitive results on two image enhancement datasets in terms of quantitative metrics and visual quality.
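
The sketch below shows one reading of the "half wavelet" idea: half of the channels stay in the spatial domain while the other half are mapped to Haar wavelet sub-bands and re-weighted by channel attention. It is an illustration of the abstract, not the exact HWMNet block.

```python
# Half of the channels remain spatial; the other half are transformed to Haar
# sub-bands and re-weighted with squeeze-and-excitation style attention.
import torch
import torch.nn as nn

def haar_dwt(x):
    # Standard single-level 2D Haar transform: (B, C, H, W) -> (B, 4C, H/2, W/2)
    x01, x02 = x[:, :, 0::2, :] / 2, x[:, :, 1::2, :] / 2
    x1, x3 = x01[:, :, :, 0::2], x01[:, :, :, 1::2]
    x2, x4 = x02[:, :, :, 0::2], x02[:, :, :, 1::2]
    ll = x1 + x2 + x3 + x4
    hl = -x1 - x2 + x3 + x4
    lh = -x1 + x2 - x3 + x4
    hh = x1 - x2 + x3 - x4
    return torch.cat((ll, hl, lh, hh), dim=1)

class HalfWaveletAttention(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        wavelet_ch = (channels // 2) * 4
        self.attn = nn.Sequential(              # channel attention over the sub-bands
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(wavelet_ch, wavelet_ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(wavelet_ch // reduction, wavelet_ch, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        spatial, wavelet = torch.chunk(x, 2, dim=1)   # keep half in the spatial domain
        bands = haar_dwt(wavelet)                     # move the other half to wavelet domain
        bands = bands * self.attn(bands)              # emphasize informative sub-bands
        # The real block would inverse-transform `bands` and fuse with `spatial`;
        # here both parts are returned for illustration.
        return spatial, bands

if __name__ == "__main__":
    block = HalfWaveletAttention(channels=32)
    spatial, bands = block(torch.randn(1, 32, 64, 64))
    print(spatial.shape, bands.shape)  # (1, 16, 64, 64) and (1, 64, 32, 32)
```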

Improved Hierarchical M-Net+ for Blind Image Denoising
Image denoising is a long-standing ill-posed problem. In recent years, convolutional neural networks (CNNs) have come to dominate the computer vision field and have achieved impressive results across vision tasks of different levels. One of the best-known hierarchical CNN backbones is U-Net, which performs well in denoising as well as other areas of computer vision. However, hierarchical architectures usually suffer from a loss of spatial information due to repeated sampling, which seriously hurts performance on element-wise tasks such as denoising. In this paper, we propose an improved hierarchical backbone, M-Net+, for image denoising to ameliorate the loss of spatial details. We evaluate it on two synthetic Gaussian noise datasets to demonstrate the competitive results of our model.
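
The sketch below illustrates the multi-scale input idea behind M-Net-style encoders: a downsampled copy of the input is concatenated at every encoder level so that details lost through repeated downsampling can be re-injected. This is a conceptual illustration only, not the published M-Net+ code.

```python
# Encoder that re-injects a bilinearly resized copy of the input at each level.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiInputEncoder(nn.Module):
    def __init__(self, in_ch=3, base_ch=16, levels=3):
        super().__init__()
        self.blocks = nn.ModuleList()
        ch = base_ch
        for lvl in range(levels):
            extra = in_ch if lvl > 0 else 0  # deeper levels also see the resized input
            inp = in_ch if lvl == 0 else ch // 2 + extra
            self.blocks.append(nn.Sequential(
                nn.Conv2d(inp, ch, 3, padding=1), nn.ReLU(inplace=True)))
            ch *= 2
        # (a decoder and the extra feature paths would follow in the full network)

    def forward(self, x):
        feats = []
        h = x
        for lvl, block in enumerate(self.blocks):
            if lvl > 0:
                h = F.max_pool2d(h, 2)                       # repeated downsampling
                img = F.interpolate(x, scale_factor=0.5 ** lvl, mode="bilinear",
                                    align_corners=False)
                h = torch.cat([h, img], dim=1)               # re-inject the resized input
            h = block(h)
            feats.append(h)
        return feats

if __name__ == "__main__":
    enc = MultiInputEncoder()
    for f in enc(torch.randn(1, 3, 64, 64)):
        print(f.shape)  # (1,16,64,64), (1,32,32,32), (1,64,16,16)
```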

Selective Residual M-Net for Real Image Denoising
Image restoration is a low-level vision task that aims to restore degraded images to noise-free images. With the success of deep neural networks, convolutional neural networks have surpassed traditional restoration methods and become the mainstream in the computer vision area. To advance the performance of denoising algorithms, we propose a blind real image denoising network (SRMNet) that employs a hierarchical architecture improved from U-Net. Specifically, we use a selective kernel with a residual block on the hierarchical structure, called M-Net, to enrich the multi-scale semantic information. Our SRMNet achieves competitive results on two synthetic and two real-world noisy datasets in terms of quantitative metrics and visual quality. The source code and pretrained model are available at https://github.com/FanChiMao/SRMNet.
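
The sketch below shows a generic selective-kernel residual block in the spirit of the abstract: two convolution branches with different receptive fields, fused by softmax attention over the branches, plus a residual connection. The exact SRMNet module is in the linked repository; this is only an illustration.

```python
# SK-style fusion of two branches with different receptive fields, plus a
# residual connection around the fused result.
import torch
import torch.nn as nn

class SelectiveResidualBlock(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, 3, padding=2, dilation=2)  # wider field
        hidden = max(channels // reduction, 4)
        self.squeeze = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                     nn.Conv2d(channels, hidden, 1), nn.ReLU(inplace=True))
        self.select = nn.Conv2d(hidden, channels * 2, 1)  # one weight set per branch

    def forward(self, x):
        b3, b5 = self.branch3(x), self.branch5(x)
        summary = self.squeeze(b3 + b5)                        # global context of both branches
        weights = self.select(summary).view(x.size(0), 2, x.size(1), 1, 1)
        weights = torch.softmax(weights, dim=1)                # softmax across the two branches
        fused = weights[:, 0] * b3 + weights[:, 1] * b5        # selective fusion
        return x + fused                                       # residual connection

if __name__ == "__main__":
    block = SelectiveResidualBlock(32)
    print(block(torch.randn(1, 32, 48, 48)).shape)  # (1, 32, 48, 48)
```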

SUNet: Swin Transformer UNet for Image Denoising
Image restoration is a challenging, long-standing ill-posed problem. In the past few years, convolutional neural networks (CNNs) have almost dominated computer vision and achieved considerable success across vision tasks of different levels, including image restoration. Recently, however, Swin Transformer-based models have also shown impressive performance, even surpassing CNN-based methods to become the state of the art on high-level vision tasks. In this paper, we propose a restoration model called SUNet, which uses the Swin Transformer layer as its basic block and applies it in a UNet architecture for image denoising.
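
The core operation behind the Swin Transformer blocks mentioned above is window partitioning: the feature map is split into non-overlapping windows so that self-attention runs inside each window rather than over the whole image. The standard utility functions are sketched below for illustration; the full SUNet combines such blocks in a UNet.

```python
# Split a (B, H, W, C) feature map into non-overlapping windows and back again.
import torch

def window_partition(x, window_size):
    """(B, H, W, C) -> (num_windows * B, window_size, window_size, C)"""
    B, H, W, C = x.shape
    x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
    return x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)

def window_reverse(windows, window_size, H, W):
    """Inverse of window_partition."""
    B = windows.shape[0] // ((H // window_size) * (W // window_size))
    x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)
    return x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)

if __name__ == "__main__":
    feat = torch.randn(1, 64, 64, 96)              # (B, H, W, C) feature map
    windows = window_partition(feat, window_size=8)
    print(windows.shape)                           # (64, 8, 8, 96): 64 windows of 8x8 tokens
    # ...window multi-head self-attention would run on each window here...
    restored = window_reverse(windows, 8, 64, 64)
    print(torch.equal(restored, feat))             # True
```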

WBTP-VTON: Whole Body and Texture Preservation based Virtual Try-On Network
Image-based virtual clothes try-on systems are becoming more and more popular, but many challenges remain to be solved. We therefore propose a new, fully learnable method called the whole body and texture preservation based virtual try-on network (WBTP-VTON) to address the practical challenges in this area. First, WBTP-VTON performs template conversion, transforming the target clothing and pants (or skirts) according to the body shape of the target person using a Geometric Matching Module (GMM). The second part synthesizes the final image and makes the generated results more realistic. Finally, we use the try-on module and synthetic masks to combine the deformed clothes with the final image and ensure image smoothness. Experiments on a large dataset show that our WBTP-VTON method achieves advanced virtual try-on performance.

Compound Multi-branch Feature Fusion for Real Image Restoration
Image restoration is a challenging, long-standing ill-posed problem. Most learning-based restoration methods target a single degradation type, which means they lack generalization. In this paper, we propose a multi-branch restoration model inspired by the human visual system (i.e., retinal ganglion cells) that can handle multiple restoration tasks in a single general framework. Experiments show that the proposed multi-branch architecture, called CMFNet, achieves competitive results on four datasets covering image deblurring, dehazing, and raindrop removal, which are very common applications for autonomous cars.
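
The sketch below shows the skeleton of a multi-branch design: three parallel sub-networks process the same degraded image and a 1x1 convolution fuses their outputs. The real CMFNet branches are full attention-equipped sub-networks; the tiny convolutional branches here only illustrate the fusion structure.

```python
# Three parallel branches restore the same input; their outputs are merged by a
# 1x1 convolution and added back to the input as a residual correction.
import torch
import torch.nn as nn

def tiny_branch(channels=16):
    return nn.Sequential(
        nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(channels, 3, 3, padding=1))

class MultiBranchFusion(nn.Module):
    def __init__(self):
        super().__init__()
        self.branches = nn.ModuleList([tiny_branch() for _ in range(3)])
        self.fuse = nn.Conv2d(3 * 3, 3, kernel_size=1)  # merge the three restored estimates

    def forward(self, x):
        outs = [branch(x) for branch in self.branches]
        fused = self.fuse(torch.cat(outs, dim=1))
        return x + fused  # predict a residual correction to the degraded input

if __name__ == "__main__":
    model = MultiBranchFusion()
    print(model(torch.randn(1, 3, 64, 64)).shape)  # (1, 3, 64, 64)
```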