Minolta may not be the first company that springs to mind when you think about cameras, but there was a time when Minolta was a leader in the field, going toe-to-toe with the likes of Nikon and Canon. Established in 1928, Minolta pioneered many features taken for granted today, and was perhaps the first company to incorporate AI into its cameras, as early as the late 1980s.
In 1977, Minolta created the first “multi mode” camera, allowing the user to select between manual, aperture-priority, shutter-priority, and “program override” modes. Thanks to an onboard microprocessor, the Program mode deferred many exposure decisions to the camera, automating choices photographers had previously made by hand.

During the 1980s, Minolta continued to introduce new features to the market such as through-the-lens (TTL) exposure metering and in-camera autofocus motors. With microprocessor-controlled autofocus and autoexposure systems, photographers were empowered to capture shots they never could have before.
Another technological leap occurred in the late 1980s and early 1990s, as Minolta began to incorporate artificial intelligence directly into its lenses and camera bodies. Minolta engineers implemented a fuzzy logic system, which allowed the camera to react to a wide variety of scenes, choosing appropriate settings faster than any human could.
Traditionally, computers are designed to deal in black-and-white, binary logic: a value is a “1” or a “0”, with no exceptions. In the 1960s, mathematician and computer scientist Lotfi A. Zadeh proposed the theory of fuzzy logic, a form of many-valued logic that supported scenarios where the “truth” could lie somewhere between 0 and 1.
Zadeh recognized that people often make decisions based on imprecise and non-numerical information. His work in fuzzy logic permitted computer systems to solve problems within those vague environments, making educated guesses rather than being precise. This helped support early forms of artificial intelligence, including natural language processing.
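To make that idea concrete, here is a minimal sketch of a fuzzy membership function (my own illustration, not anything drawn from Zadeh’s papers or Minolta’s firmware). Instead of answering a hard yes or no, it grades how strongly a measurement such as subject distance belongs to the set “near”:

```python
def membership_near(distance_m):
    """Degree (0.0 to 1.0) to which a subject counts as "near".

    Anything closer than 1 m is fully "near" (1.0), anything beyond
    5 m is not "near" at all (0.0), and distances in between earn a
    partial, fuzzy degree of membership rather than a hard yes/no.
    """
    if distance_m <= 1.0:
        return 1.0
    if distance_m >= 5.0:
        return 0.0
    return (5.0 - distance_m) / 4.0  # linear ramp between the two thresholds


for d in (0.5, 2.0, 4.0, 6.0):
    print(f"{d} m -> near = {membership_near(d):.2f}")
# 0.5 m -> 1.00, 2.0 m -> 0.75, 4.0 m -> 0.25, 6.0 m -> 0.00
```

A subject at 2 m, for instance, is 0.75 “near”: a partial truth that strict binary logic has no way to express.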
By the 1990s, most cameras had incorporated microprocessors to assist in making exposure decisions. At its heart, a camera was a fairly simple device: a lens directed light onto a frame of light-sensitive film. But how much light hit that film determined whether a shot would be overexposed (too bright), underexposed (too dark), or perfectly exposed (just right).
There are three variables that determine exposure: aperture, shutter speed, and film speed. Lenses can be adjusted to allow more or less light through the glass (aperture). Single-lens-reflex cameras incorporate a shutter that briefly flips open to allow the light to hit the film; how long the shutter stays open is called shutter speed. And film could be manufactured to respond more or less readily to light, with that sensitivity indicated by a rating known as ISO. ISO 400 film was slower to react to light than ISO 1600 film, but higher-speed films were grainier, losing detail.
Unlike today’s digital cameras, which can vary their light sensitivity from shot to shot, film cameras locked in the film speed the moment the photographer loaded a roll. Aperture and shutter speed, however, could still be adjusted from shot to shot to help dial in the correct exposure.
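As a rough illustration of how these two variables trade against one another (the pairings below are generic textbook values, not tied to any Minolta model), exposure is often expressed as an exposure value, EV = log₂(N²/t), where N is the f-number and t is the shutter time in seconds. Two aperture/shutter combinations with the same EV admit the same amount of light:

```python
import math

def exposure_value(f_number, shutter_seconds):
    """Standard exposure value: EV = log2(N^2 / t)."""
    return math.log2(f_number ** 2 / shutter_seconds)

# f/8 at 1/125 s and f/5.6 at 1/250 s land on (nearly) the same EV,
# which is why a camera (or photographer) can trade one against the other.
print(round(exposure_value(8.0, 1 / 125), 1))   # ~13.0
print(round(exposure_value(5.6, 1 / 250), 1))   # ~12.9
```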
Complicating matters, aperture and shutter speed don’t just affect exposure. Aperture has a direct effect on depth of field, which is how much of an image, from near to far, the camera renders in sharp focus. When the lens opening is narrowed to let in a minimal amount of light, the depth of field is deep: most of the photo will be razor sharp, from objects nearby to ones far in the distance. When the lens is opened all the way to let in maximum light, the depth of field narrows, and only a sliver of the scene might be in focus.
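To get a feel for how strongly aperture drives this effect, the standard close-range approximation DoF ≈ 2·u²·N·c/f² (subject distance u, f-number N, circle of confusion c, focal length f) can be sketched in a few lines. The 50 mm lens and 3 m subject distance below are illustrative values of my choosing, not a reference to any particular Minolta lens:

```python
def depth_of_field_m(subject_distance_m, f_number, focal_length_mm,
                     circle_of_confusion_mm=0.03):
    """Approximate depth of field (in metres) for a subject well inside
    the hyperfocal distance: DoF ~= 2 * u^2 * N * c / f^2.

    The 0.03 mm circle of confusion is the value usually quoted for 35 mm film.
    """
    u_mm = subject_distance_m * 1000.0
    dof_mm = (2 * u_mm ** 2 * f_number * circle_of_confusion_mm
              / focal_length_mm ** 2)
    return dof_mm / 1000.0

# A 50 mm lens focused at 3 m: the in-focus zone grows roughly in
# proportion to the f-number as the aperture is stopped down.
for n in (2.0, 5.6, 11.0):
    print(f"f/{n}: ~{depth_of_field_m(3.0, n, 50.0):.2f} m in acceptable focus")
```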
Fast shutter speeds can freeze action, while slower shutter speeds can cause moving objects to be blurry.
So there are quite a few variables at play when shooting a photo, making every snapshot a potential set of compromises. You – or the camera’s automated systems – can tinker with aperture and shutter speed to dial in a properly exposed shot, but manipulating these variables has other effects that may or may not be desirable, depending on the photographer’s creative preferences.
And then there’s focus. Lens elements can be moved to focus on objects nearer to or farther from the camera body, allowing a subject to be sharp, for example, while the background blurs.

With all these variables at play, it’s easy to understand why Minolta was attracted to fuzzy logic. In its Program mode, the company wanted to empower photographers to hit the shutter button and have the camera select the right aperture, shutter speed, and focus for an ideal shot every time, even in very dynamic scenes with lots of motion.
And there’s an infinite variety of scenes that could appear in front of a camera. One scene could feature mountains in the distance with fog drifting through them, while another could highlight a baseball bat striking a ball on a sun-draped field.
Minolta’s engineers defined fuzzy sets for photographic parameters such as sharpness, subject contrast, and light intensity. They categorized these into concepts such as “near,” “far,” “bright,” and “dim.” They considered how photographers would react to these parameters, and created fuzzy rules to mimic the choices an expert photographer would make.
Inputs to the fuzzy logic came from a wide variety of sensors that detected distance to subject, subject speed, light levels, and color temperature. Data from these sensors was “fuzzified,” converting it into degrees of membership within the fuzzy sets.
Passing through an inference engine, the fuzzified inputs became fuzzy outputs, which were then defuzzified into specific settings for the camera’s electrical and mechanical systems. And this process could run in real time, continuously adapting to the evolving scene in front of the lens.
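Here is a toy sketch of that pipeline. The single “subject speed” input, the two fuzzy sets, and the two rules are invented for illustration; Minolta’s actual rule base was far richer and, as far as I know, was never published. Fuzzified inputs fire rules, the rules’ outputs are blended in proportion to how strongly each fired, and the blend is defuzzified into one concrete shutter speed:

```python
def fuzzify_speed(speed_mps):
    """Turn a raw subject-speed reading into degrees of membership in
    two fuzzy sets, "slow" and "fast" (each between 0.0 and 1.0)."""
    fast = min(max((speed_mps - 1.0) / 4.0, 0.0), 1.0)  # ramps up from 1 to 5 m/s
    return {"slow": 1.0 - fast, "fast": fast}


# Each rule maps a fuzzy condition to a candidate shutter time (seconds):
# IF subject is slow THEN shoot at 1/60 s; IF subject is fast THEN 1/1000 s.
RULES = {"slow": 1 / 60, "fast": 1 / 1000}


def defuzzify_shutter(memberships):
    """Weighted-average defuzzification: each rule's output counts in
    proportion to how strongly its condition fired."""
    total = sum(memberships.values())
    return sum(memberships[name] * RULES[name] for name in RULES) / total


for speed in (0.5, 2.0, 4.5):
    m = fuzzify_speed(speed)
    shutter = defuzzify_shutter(m)
    print(f"{speed} m/s -> slow={m['slow']:.2f}, fast={m['fast']:.2f}, "
          f"shutter ~1/{round(1 / shutter)} s")
# 0.5 m/s -> 1/60 s, 2.0 m/s -> ~1/78 s, 4.5 m/s -> ~1/338 s
```

The appeal of this approach is that the chosen setting shifts smoothly as the scene changes, rather than snapping between a handful of rigid preset modes.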
The imprecise nature of photography meant that the camera wouldn’t always land on the ideal settings for every possible photograph, but in practice, it did very well most of the time, and could make these decisions far faster than a human. Fuzzy logic helped the camera simulate the decisions that a pro photographer would make for a given set of input parameters.
In another unusual twist, the xi series of cameras supported power zoom lenses, a feature more commonly found on dedicated video cameras. By rotating the barrel on the lens, the photographer could zoom in and out of a scene. Minolta also extended fuzzy logic to the motorized zoom lenses: when the Auto Standby Zoom (ASZ) option was enabled and the camera was shooting in full auto mode, the camera would automatically zoom in to achieve what it estimated was optimal framing after identifying and analyzing a likely subject within the frame.
The Maxxum 9xi became Minolta’s flagship professional camera in 1992, followed by the Maxxum 9 in 1998. Unfortunately, by the early 2000s, Minolta was struggling to compete with rivals such as Canon and Nikon. Minolta and Konica merged in 2004, and in 2006, Konica Minolta exited the film and digital camera market, ending Minolta’s 78-year run as a camera manufacturer.
Fast forward to today.
Every person walking around with a modern smartphone has a very capable camera in their pocket, and photographic decisions are being made by algorithms to degrees unimaginable just two decades ago. AI is being used not only to select basic settings such as aperture and shutter speed, but to automatically detect subjects in the frame, focusing on their eyes and tracking them in three-dimensional space as they move. It may also, controversially, be used to manipulate photos, creating an artificial reality that didn’t exist in real life.
That’s a subject that we’ll explore in more depth in a future post.
